Stacked Autoencoders

Initial translation: @小猴机器人

First review: @邓亚峰-人脸识别
===Overview===
The greedy layerwise approach for pretraining a deep network works by training each layer in turn. On this page, you will see how autoencoders can be "stacked" in a greedy layerwise fashion for pretraining (initializing) the weights of a deep network.
A stacked autoencoder is a neural network consisting of multiple layers of sparse autoencoders in which the output of each layer is wired to the input of the successive layer. Formally, consider a stacked autoencoder with n layers. Using the notation from the autoencoder section, let <math>W^{(k, 1)}, W^{(k, 2)}, b^{(k, 1)}, b^{(k, 2)}</math> denote the parameters <math>W^{(1)}, W^{(2)}, b^{(1)}, b^{(2)}</math> of the kth autoencoder. The encoding step for the stacked autoencoder is then given by running the encoding step of each layer in forward order:

<math>
\begin{align}
a^{(l)} = f(z^{(l)}) \\
z^{(l + 1)} = W^{(l, 1)}a^{(l)} + b^{(l, 1)}
\end{align}
</math>
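As a quick illustration (a sketch, not part of the original page), the encoding pass can be written in NumPy as follows; here the activation f is assumed to be the sigmoid and <code>params</code> is an assumed list of per-autoencoder encoding parameters <math>(W^{(k,1)}, b^{(k,1)})</math>:

<pre>
import numpy as np

def sigmoid(z):
    """Activation function f; the sigmoid is assumed here."""
    return 1.0 / (1.0 + np.exp(-z))

def stacked_encode(x, params):
    """Run the encoding step of each autoencoder in forward order.

    params: list of (W1, b1) pairs, i.e. the encoding weights W^{(k,1)}
    and biases b^{(k,1)} of the k-th autoencoder.
    Returns the deepest hidden activation a^{(n)}.
    """
    a = x
    for W1, b1 in params:
        z = W1 @ a + b1   # z^{(l+1)} = W^{(l,1)} a^{(l)} + b^{(l,1)}
        a = sigmoid(z)    # a^{(l+1)} = f(z^{(l+1)})
    return a
</pre>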
The decoding step is given by running the decoding stack of each autoencoder in reverse order:

<math>
\begin{align}
a^{(n + l)} = f(z^{(n + l)}) \\
z^{(n + l + 1)} = W^{(n - l, 2)}a^{(n + l)} + b^{(n - l, 2)}
\end{align}
</math>
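Continuing the sketch above (again illustrative, under the same assumptions), the decoding pass applies the assumed decoding parameters <math>(W^{(k,2)}, b^{(k,2)})</math> in reverse order:

<pre>
def stacked_decode(a_n, params):
    """Run the decoding step of each autoencoder in reverse order.

    params: list of (W2, b2) pairs, i.e. the decoding weights W^{(k,2)}
    and biases b^{(k,2)} of the k-th autoencoder.
    a_n: the deepest hidden activation a^{(n)} from stacked_encode.
    Returns the reconstruction of the original input.
    """
    a = a_n
    for W2, b2 in reversed(params):
        z = W2 @ a + b2   # z^{(n+l+1)} = W^{(n-l,2)} a^{(n+l)} + b^{(n-l,2)}
        a = sigmoid(z)    # a^{(n+l+1)} = f(z^{(n+l+1)})
    return a
</pre>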
The information of interest is contained within <math>a^{(n)}</math>, which is the activation of the deepest layer of hidden units. This vector gives us a representation of the input in terms of higher-order features.  
The features from the stacked autoencoder can be used for classification problems by feeding <math>a^{(n)}</math> to a softmax classifier.
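For example (an illustrative sketch reusing the NumPy helpers above; <code>W_softmax</code> and <code>b_softmax</code> are assumed softmax parameters), the classifier on top of <math>a^{(n)}</math> could be evaluated as:

<pre>
def softmax_predict(a_n, W_softmax, b_softmax):
    """Map the deepest activation a^{(n)} to class probabilities."""
    scores = W_softmax @ a_n + b_softmax
    scores = scores - scores.max()        # subtract max for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()  # softmax probabilities
</pre>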
===Training===
A good way to obtain good parameters for a stacked autoencoder is to use greedy layer-wise training. To do this, first train the first layer on the raw input to obtain parameters <math>W^{(1,1)}, W^{(1,2)}, b^{(1,1)}, b^{(1,2)}</math>. Then use the first layer to transform the raw input into a vector of hidden unit activations. Train the second layer on this vector to obtain parameters <math>W^{(2,1)}, W^{(2,2)}, b^{(2,1)}, b^{(2,2)}</math>. Repeat for subsequent layers, using the output of each layer as the input to the next.
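The loop below is a rough sketch of this procedure (illustrative only, not the exercise code); <code>train_sparse_autoencoder</code> is a hypothetical routine that trains a single sparse autoencoder on a data matrix and returns its parameters:

<pre>
def greedy_layerwise_pretrain(X, num_layers):
    """Pretrain a stack of sparse autoencoders one layer at a time.

    X: data matrix with one training example per column.
    Returns the list of per-layer parameters (W1, b1, W2, b2).
    """
    params = []
    layer_input = X
    for k in range(num_layers):
        # Train the k-th autoencoder on the current representation,
        # leaving all previously trained layers fixed.
        W1, b1, W2, b2 = train_sparse_autoencoder(layer_input)
        params.append((W1, b1, W2, b2))
        # Forward the data through the new layer; these hidden
        # activations become the "raw input" for the next autoencoder.
        layer_input = sigmoid(W1 @ layer_input + b1)
    return params
</pre>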
This method trains the parameters of each layer individually while freezing the parameters of the rest of the model. After this phase of training is complete, [[Fine-tuning Stacked AEs | fine-tuning]] using backpropagation can be used to improve the results by adjusting the parameters of all layers at the same time.
<!-- In practice, fine-tuning should be used when the parameters have been brought close to convergence through layer-wise training. Attempting to use fine-tuning with the weights initialized randomly will lead to poor results due to local optima. -->
{{Quote|
If one is only interested in fine-tuning for the purposes of classification, the common practice is to then discard the "decoding" layers of the stacked autoencoder and link the last hidden layer <math>a^{(n)}</math> to the softmax classifier. The gradients from the (softmax) classification error will then be backpropagated into the encoding layers.
}}
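To make the quoted setup concrete (again only a sketch, reusing <code>stacked_encode</code> and <code>softmax_predict</code> from above with assumed parameter names), the network used for classification fine-tuning keeps only the encoding parameters and the softmax layer; its forward pass and loss for one example might look like:

<pre>
def finetune_loss(x, y, encoder_params, W_softmax, b_softmax):
    """Cross-entropy loss of the fine-tuning network for one example.

    encoder_params: list of (W1, b1) encoding parameters; the decoding
    parameters (W2, b2) are discarded for classification fine-tuning.
    y: integer class label.
    During fine-tuning, the gradient of this loss is backpropagated
    through the softmax layer and all encoding layers.
    """
    a_n = stacked_encode(x, encoder_params)             # a^{(n)}
    probs = softmax_predict(a_n, W_softmax, b_softmax)  # class probabilities
    return -np.log(probs[y])
</pre>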
===Concrete example===
To give a concrete example, suppose you wished to train a stacked autoencoder with 2 hidden layers for classification of MNIST digits, as you will be doing in [[Exercise: Implement deep networks for digit classification | the next exercise]].
First, you would train a sparse autoencoder on the raw inputs <math>x^{(k)}</math> to learn primary features <math>h^{(1)(k)}</math> on the raw input.
[[File:Stacked_SparseAE_Features1.png|400px]]
Next, you would feed the raw input into this trained sparse autoencoder, obtaining the primary feature activations <math>h^{(1)(k)}</math> for each of the inputs <math>x^{(k)}</math>. You would then use these primary features as the "raw input" to another sparse autoencoder to learn secondary features <math>h^{(2)(k)}</math> on these primary features.
[[File:Stacked_SparseAE_Features2.png|400px]]
Following this, you would feed the primary features into the second sparse autoencoder to obtain the secondary feature activations <math>h^{(2)(k)}</math> for each of the primary features <math>h^{(1)(k)}</math> (which correspond to the primary features of the corresponding inputs <math>x^{(k)}</math>). You would then treat these secondary features as "raw input" to a softmax classifier, training it to map secondary features to digit labels.
[[File:Stacked_Softmax_Classifier.png|400px]]
Finally, you would combine all three layers together to form a stacked autoencoder with 2 hidden layers and a final softmax classifier layer capable of classifying the MNIST digits as desired.
[[File:Stacked_Combined.png|500px]]
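As an illustrative outline of this example (not the exercise code; <code>train_sparse_autoencoder</code> and <code>train_softmax</code> are hypothetical helpers, and <code>X</code>, <code>y</code>, <code>x_test</code> are assumed MNIST data), the whole pipeline could be wired together as:

<pre>
# X: MNIST images, one example per column; y: digit labels.
W11, b11, _, _ = train_sparse_autoencoder(X)    # first autoencoder on raw input
H1 = sigmoid(W11 @ X + b11)                     # primary features h^{(1)(k)}
W21, b21, _, _ = train_sparse_autoencoder(H1)   # second autoencoder on h^{(1)}
H2 = sigmoid(W21 @ H1 + b21)                    # secondary features h^{(2)(k)}
W_s, b_s = train_softmax(H2, y)                 # softmax classifier on h^{(2)}

# Combined network: x -> h^{(1)} -> h^{(2)} -> digit class probabilities.
encoder_params = [(W11, b11), (W21, b21)]
probs = softmax_predict(stacked_encode(x_test, encoder_params), W_s, b_s)
</pre>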
===Discussion===
A stacked autoencoder enjoys all the benefits of a deep network, including its greater expressive power.
Further, it often captures a useful "hierarchical grouping" or "part-whole decomposition" of the input.  To see this, recall that an autoencoder tends to learn features that form a good representation of its input. The first layer of a stacked autoencoder tends to learn first-order features in the raw input (such as edges in an image). The second layer of a stacked autoencoder tends to learn second-order features corresponding to patterns in the appearance of first-order features (e.g., in terms of what edges tend to occur together--for example, to form contour or corner detectors). Higher layers of the stacked autoencoder tend to learn even higher-order features.
{{CNN}}
<!--
For instance, in the context of image input, the first layer usually learns to recognize edges. The second layer usually learns features that arise from combinations of the edges, such as corners. With certain types of network configuration and input modes, the higher layers can learn meaningful combinations of features. For instance, if the input set consists of images of faces, higher layers may learn features corresponding to parts of the face such as eyes, noses or mouths.
-->
