Stacked Autoencoders

From Ufldl

Line 172:
Further, it often captures a useful "hierarchical grouping" or "part-whole decomposition" of the input.  To see this, recall that an autoencoder tends to learn features that form a good representation of its input. The first layer of a stacked autoencoder tends to learn first-order features in the raw input (such as edges in an image). The second layer of a stacked autoencoder tends to learn second-order features corresponding to patterns in the appearance of first-order features (e.g., in terms of what edges tend to occur together--for example, to form contour or corner detectors). Higher layers of the stacked autoencoder tend to learn even higher-order features.  
[First draft]

A stacked autoencoder neural network enjoys all the advantages of any expressive deep network (promotional wording). Further, it often captures useful phenomena in the input, such as a "hierarchical grouping" or a "part-whole decomposition". Why say this? Recall that every autoencoder network tends to learn features that better represent its input data. The first layer of a stacked network learns the most basic features (such as edges in an image). The second layer learns second-order features corresponding to the various patterns among those basic features (e.g., what kinds of edges tend to occur together, for example forming connected regions or corners). Higher layers of the stacked network learn features at even higher levels.

[First review]

A stacked autoencoder neural network has all the advantages of any deep neural network with strong representational power.

Further, it can usually capture a "hierarchical grouping" or "part-whole decomposition" structure of the input. To see this, recall that an autoencoder tends to learn features that better represent its input data. Thus, the first layer of a stacked autoencoder network learns first-order features of the raw input (such as edges in an image), and the second layer learns second-order features corresponding to the patterns in which first-order features appear (e.g., what kinds of edges tend to co-occur when forming contours or corners). Higher layers of the stacked autoencoder network learn even higher-order features.
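The layer-by-layer feature learning described above can be sketched as greedy layer-wise training: each layer is trained as an ordinary autoencoder on the features produced by the layer below it, and the trained encoders are then stacked. The sketch below is a minimal NumPy illustration, not the UFLDL implementation; the layer sizes, learning rate, and epoch count are illustrative assumptions.

```python
# Minimal sketch of greedy layer-wise training of a stacked autoencoder.
# Hyperparameters and layer sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder_layer(X, n_hidden, lr=0.5, epochs=200):
    """Train one sigmoid autoencoder on X; return the encoder params (W, b)."""
    m, n_in = X.shape
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # encode
        X_hat = sigmoid(H @ W2 + b2)    # decode (reconstruct the input)
        # Gradient of mean squared reconstruction error, backpropagated
        d_out = (X_hat - X) * X_hat * (1 - X_hat) / m
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)
    return W1, b1

def stack_encode(X, layers):
    """Feed X through the stacked encoders: raw input -> higher-order features."""
    A = X
    for W, b in layers:
        A = sigmoid(A @ W + b)
    return A

# Greedy layer-wise training: layer 2 is an autoencoder trained on the
# features (activations) produced by layer 1, mirroring how second-order
# features are built from patterns of first-order features.
X = rng.random((100, 8))            # toy "raw input" data
layers, A = [], X
for n_hidden in (6, 4):             # two stacked layers (sizes are assumptions)
    W, b = train_autoencoder_layer(A, n_hidden)
    layers.append((W, b))
    A = sigmoid(A @ W + b)          # features become the next layer's input

features = stack_encode(X, layers)
print(features.shape)               # highest-order feature representation
```

In a full pipeline these stacked encoder weights would then be fine-tuned jointly (e.g., with a supervised output layer), which is what gives the deep network its hierarchical representation.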

Revision as of 12:29, 8 March 2013
