From Self-Taught Learning to Deep Networks

From Ufldl

[First review]
In self-taught learning, we first train a sparse autoencoder on the unlabeled data. Then, given a new example <math>x</math>, we extract the features <math>a</math> from the hidden layer. This process is illustrated below:
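In code, this feature-extraction step amounts to a single forward pass through the autoencoder's hidden layer. The following Python sketch is illustrative only (it is not part of the original tutorial) and assumes a sigmoid activation with learned encoder weights <math>W^{(1)}</math> and bias <math>b^{(1)}</math>:

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def extract_features(W1, b1, x):
    # Hidden-layer activations of the trained sparse autoencoder,
    # a = f(W1 x + b1); these activations serve as the new features.
    # W1 and b1 are assumed to be the learned encoder parameters.
    return sigmoid(W1 @ x + b1)
</pre>

Applying extract_features to each labeled input <math>x_l^{(i)}</math> yields the replacement features <math>a^{(i)}</math> used in the classification step described below.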
[Original text]
We are interested in solving a classification task, where our goal is to predict labels <math>\textstyle y</math>.  We have a labeled training set <math>\textstyle \{ (x_l^{(1)}, y^{(1)}), (x_l^{(2)}, y^{(2)}), \ldots, (x_l^{(m_l)}, y^{(m_l)}) \}</math> of <math>\textstyle m_l</math> labeled examples. We showed previously that we can replace the original features <math>\textstyle x^{(i)}</math> with features <math>\textstyle a^{(i)}</math> computed by the sparse autoencoder (the "replacement" representation).  This gives us a training set <math>\textstyle \{(a^{(1)},y^{(1)}), \ldots, (a^{(m_l)}, y^{(m_l)}) \}</math>.  Finally, we train a logistic classifier to map from the features <math>\textstyle a^{(i)}</math> to the classification label <math>\textstyle y^{(i)}</math>. To illustrate this step, similar to [[Neural Networks|our earlier notes]], we can draw our logistic regression unit (shown in orange) as follows:
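The final step can likewise be sketched briefly. The Python below is a minimal illustration, not the tutorial's own code: the helper name, hyperparameters, and the use of plain batch gradient descent are assumptions. It fits a binary logistic regression unit to the replacement features <math>\textstyle a^{(i)}</math>, stacked as the rows of a matrix A, with labels y:

<pre>
import numpy as np

def train_logistic(A, y, lr=0.1, n_iters=1000):
    # A: (m_l, k) matrix whose rows are the replacement features a^(i);
    # y: length-m_l vector of binary labels.
    # Batch gradient descent on the logistic (log-loss) objective.
    m, k = A.shape
    theta = np.zeros(k)
    b = 0.0
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-(A @ theta + b)))  # predicted P(y = 1 | a)
        grad_theta = A.T @ (p - y) / m
        grad_b = np.mean(p - y)
        theta -= lr * grad_theta
        b -= lr * grad_b
    return theta, b
</pre>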

