Sparse Coding: Autoencoder Interpretation

== Sparse coding ==

In the sparse autoencoder, we tried to learn a set of weights <math>W</math> (and associated biases <math>b</math>) that would give us sparse features <math>\sigma(Wx + b)</math> useful in reconstructing the input <math>x</math>.

[[File:STL_SparseAE.png | 240px]]

Sparse coding can be seen as a modification of the sparse autoencoder method in which we try to learn the set of features for some data "directly". Together with an associated basis for transforming the learned features from the feature space to the data space, we can then reconstruct the data from the learned features.

Formally, in sparse coding, we have some data <math>x</math> we would like to learn features on. In particular, we would like to learn <math>s</math>, a set of sparse features useful for representing the data, and <math>A</math>, a basis for transforming the features from the feature space to the data space. Our objective function is hence:

:<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1
</math>

(If you are unfamiliar with the notation, <math>\lVert x \rVert_k</math> refers to the L<math>k</math> norm of <math>x</math>, which is equal to <math>\left( \sum{ \left| x_i \right|^k } \right)^{\frac{1}{k}}</math>. The L2 norm is the familiar Euclidean norm, while the L1 norm is the sum of the absolute values of the elements of the vector.)
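
As a quick illustration of this notation (an added sketch, not part of the original text), the following Python snippet computes a few norms of a small vector with numpy; the variable names are arbitrary.

<pre>
import numpy as np

x = np.array([1.0, -2.0, 3.0])

# Lk norm: (sum_i |x_i|^k)^(1/k)
def lk_norm(x, k):
    return np.sum(np.abs(x) ** k) ** (1.0 / k)

print(lk_norm(x, 1))           # L1 norm: |1| + |-2| + |3| = 6
print(lk_norm(x, 2))           # L2 (Euclidean) norm: sqrt(14), about 3.742
print(np.linalg.norm(x, 1))    # numpy's built-in norm gives the same values
print(np.linalg.norm(x, 2))
</pre>
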
The first term is the error in reconstructing the data from the features using the basis, and the second term is a sparsity penalty term to encourage the learned features to be sparse.

However, the objective function as it stands is not properly constrained - it is possible to reduce the sparsity cost (the second term) by scaling <math>A</math> by some constant and scaling <math>s</math> by the inverse of the same constant, without changing the error. Hence, we include the additional constraint that for every column <math>A_j</math> of <math>A</math>, <math>A_j^TA_j \le 1</math>. Our problem is thus:

:<math>
\begin{array}{rcl}
    {\rm minimize} & \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 \\
    {\rm s.t.}    &    A_j^TA_j \le 1 \; \forall j \\
\end{array}
</math>
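
To see why this constraint is needed, here is a small numerical check (an illustrative sketch, not from the original text): scaling <math>A</math> up by a constant and <math>s</math> down by the same constant leaves the reconstruction error unchanged but shrinks the L1 term, so without the constraint the penalty could be made arbitrarily small without learning anything.

<pre>
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5                       # data dimension, number of features (arbitrary)
A = rng.normal(size=(n, k))       # basis
s = rng.normal(size=k)            # features
x = rng.normal(size=n)            # data point
lam, c = 0.1, 10.0                # sparsity weight, arbitrary scaling constant

def J(A, s):
    return np.sum((A @ s - x) ** 2) + lam * np.sum(np.abs(s))

# reconstruction error is identical, but the L1 term shrinks by a factor of c, so J decreases
print(np.sum((A @ s - x) ** 2), np.sum(((c * A) @ (s / c) - x) ** 2))
print(J(A, s), J(c * A, s / c))
</pre>
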

Unfortunately, the objective function is non-convex, and hence difficult to optimize well using gradient-based methods. However, given <math>A</math>, the problem of finding <math>s</math> that minimizes <math>J(A, s)</math> is convex. Similarly, given <math>s</math>, the problem of finding <math>A</math> that minimizes <math>J(A, s)</math> is also convex. This suggests that we might try alternately optimizing for <math>A</math> for a fixed <math>s</math>, and then optimizing for <math>s</math> given a fixed <math>A</math>. It turns out that this works quite well in practice.

However, the form of our problem presents another difficulty - the constraint that <math>A_j^TA_j \le 1 \; \forall j</math> cannot be enforced using simple gradient-based methods. Hence, in practice, this constraint is weakened to a "weight decay" term designed to keep the entries of <math>A</math> small. This gives us a new objective function:

:<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 + \gamma \lVert A \rVert_2^2
</math>

(Note that the third term, <math>\lVert A \rVert_2^2</math>, is simply the sum of the squares of the entries of <math>A</math>, that is, <math>\sum_r{\sum_c{A_{rc}^2}}</math>.)

This objective function presents one last problem - the L1 norm is not differentiable at 0, and hence poses a problem for gradient-based methods. While the problem can be solved using other non-gradient descent-based methods, we will "smooth out" the L1 norm using an approximation which will allow us to use gradient descent. To "smooth out" the L1 norm, we use <math>\sqrt{x^2 + \epsilon}</math> in place of <math>\left| x \right|</math>, where <math>\epsilon</math> is a "smoothing parameter" which can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when <math>\epsilon</math> is large compared to <math>x^2</math>, the <math>x^2 + \epsilon</math> is dominated by <math>\epsilon</math>, and taking the square root yields approximately <math>\sqrt{\epsilon}</math>). This "smoothing" will come in handy later when considering topographic sparse coding below.
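
The effect of the smoothing can be checked numerically (a small added illustration, not from the original text): near <math>0</math> the approximation levels off at about <math>\sqrt{\epsilon}</math> instead of having a kink, while for values much larger than <math>\sqrt{\epsilon}</math> it stays close to <math>\left| x \right|</math>.

<pre>
import numpy as np

xs = np.array([0.0, 0.01, 0.1, 1.0])
for eps in (1e-4, 1e-2):
    # compare |x| with the smoothed version sqrt(x^2 + eps)
    print(eps, np.abs(xs), np.sqrt(xs ** 2 + eps))
</pre>
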

Our final objective function is hence:

:<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sqrt{s^2 + \epsilon} + \gamma \lVert A \rVert_2^2
</math>

(where <math>\sqrt{s^2 + \epsilon}</math> is shorthand for <math>\sum_k{\sqrt{s_k^2 + \epsilon}}</math>)
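
A direct translation of this objective and its gradients into code looks roughly like the following (a sketch with assumed variable names and shapes, not reference code from the original text). Here the columns of <math>X</math> are the examples in a mini-batch, so the objective is summed over the batch; the gradients can be verified with a numerical gradient check.

<pre>
import numpy as np

def sparse_coding_objective(A, S, X, lam, gamma, eps):
    """Smoothed sparse coding objective J(A, S) and its gradients.

    A: (n, k) basis, S: (k, m) features, X: (n, m) data, columns are examples.
    """
    R = A @ S - X                              # reconstruction residual
    smooth = np.sqrt(S ** 2 + eps)             # smoothed |S|, elementwise
    J = np.sum(R ** 2) + lam * np.sum(smooth) + gamma * np.sum(A ** 2)
    grad_S = 2 * A.T @ R + lam * S / smooth    # dJ/dS
    grad_A = 2 * R @ S.T + 2 * gamma * A       # dJ/dA
    return J, grad_S, grad_A
</pre>
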

This objective function can then be optimized iteratively, using the following procedure:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>

Observe that with our modified objective function, the objective function <math>J(A, s)</math> given <math>s</math>, that is <math>J(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math> (the L1 term in <math>s</math> can be omitted since it is not a function of <math>A</math>) is simply a quadratic term in <math>A</math>, and hence has an easily derivable analytic solution in <math>A</math>. A quick way to derive this solution would be to use matrix calculus - some pages about matrix calculus can be found in the [[Useful Links | useful links]] section. Unfortunately, the objective function given <math>A</math> does not have a similarly nice analytic solution, so that minimization step will have to be carried out using gradient descent or similar optimization methods.
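
Concretely, working through that matrix calculus (this step is not spelled out in the original text, but follows directly from the objective above): setting the gradient of <math>J(A; s)</math> with respect to <math>A</math>, namely <math>2(As - x)s^T + 2\gamma A</math>, to zero and solving for <math>A</math> gives

:<math>
A = xs^T\left( ss^T + \gamma I \right)^{-1}
</math>

where, when optimizing over a mini-batch, <math>x</math> and <math>s</math> are the matrices whose columns are the examples and their features, and <math>I</math> is the <math>k \times k</math> identity matrix.
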

In theory, optimizing for this objective function using the iterative method as above should (eventually) yield features (the basis vectors of <math>A</math>) similar to those learned using the sparse autoencoder. However, in practice, there are quite a few tricks required for better convergence of the algorithm, and these tricks are described in greater detail in the later section on [[ Sparse Coding: Autoencoder Interpretation#Sparse coding in practice | sparse coding in practice]]. Deriving the gradients for the objective function may be slightly tricky as well, and using matrix calculus or [[Deriving gradients using the backpropagation idea | using the backpropagation intuition]] can be helpful.

== Topographic sparse coding ==

With sparse coding, we can learn a set of features useful for representing the data. However, drawing inspiration from the brain, we would like to learn a set of features that are "orderly" in some manner. For instance, consider visual features. As suggested earlier, the V1 cortex of the brain contains neurons which detect edges at particular orientations. However, these neurons are also organized into hypercolumns in which adjacent neurons detect edges at similar orientations. One neuron could detect a horizontal edge, its neighbors could detect edges oriented slightly off the horizontal, and moving further along the hypercolumn, the neurons detect edges oriented further off the horizontal.

Inspired by this example, we would like to learn features which are similarly "topographically ordered". What does this imply for our learned features? Intuitively, if "adjacent" features are "similar", we would expect that if one feature is activated, its neighbors will also be activated to a lesser extent.

Concretely, suppose we (arbitrarily) organized our features into a square matrix. We would then like adjacent features in the matrix to be similar. The way this is accomplished is to group these adjacent features together in the smoothed L1 penalty, so that instead of, say, <math>\sqrt{s_{1,1}^2 + \epsilon}</math>, we use, say, <math>\sqrt{s_{1,1}^2 + s_{1,2}^2 + s_{1,3}^2 + s_{2,1}^2 + s_{2,2}^2 + s_{2,3}^2 + s_{3,1}^2 + s_{3,2}^2 + s_{3,3}^2 + \epsilon}</math> if we group in 3x3 regions. The grouping is usually overlapping, so that the 3x3 region starting at the 1st row and 1st column is one group, the 3x3 region starting at the 1st row and 2nd column is another group, and so on. Further, the grouping is also usually done wrapping around, as if the matrix were a torus, so that every feature is counted an equal number of times.

Hence, in place of the smoothed L1 penalty, we use the sum of smoothed L1 penalties over all the groups, so our new objective function is:

:<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sum_{\text{all groups } g}{\sqrt{ \left( \sum_{\text{all } s \in g}{s^2} \right) + \epsilon} } + \gamma \lVert A \rVert_2^2
</math>

In practice, the "grouping" can be accomplished using a "grouping matrix" <math>V</math>, such that the <math>r</math>th row of <math>V</math> indicates which features are grouped in the <math>r</math>th group, so <math>V_{r, c} = 1</math> if group <math>r</math> contains feature <math>c</math>. Thinking of the grouping as being achieved by a grouping matrix makes the computation of the gradients more intuitive. Using this grouping matrix, the objective function can be rewritten as:
:<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sum{ \sqrt{Vss^T + \epsilon} } + \gamma \lVert A \rVert_2^2
</math>

(where <math>\sum{ \sqrt{Vss^T + \epsilon} }</math> is <math>\sum_r{ \sum_c { D_{r, c} } } </math> if we let <math>D = \sqrt{Vss^T + \epsilon}</math>)
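
As an illustration (a sketch with assumed shapes and helper names, not code from the original text), a grouping matrix for overlapping 3x3 groups with toroidal wrap-around can be built as follows; the grouped penalty is then obtained by applying <math>V</math> to the elementwise square of <math>s</math>, which gives each group's sum of squared features.

<pre>
import numpy as np

def grouping_matrix(dim, group_size=3):
    """V[r, c] = 1 if group r contains feature c.

    The k = dim * dim features are assumed to be arranged in a dim x dim grid,
    and each group is a group_size x group_size patch of that grid, wrapping
    around the edges (torus) so every feature appears in the same number of groups.
    """
    k = dim * dim
    grid = np.arange(k).reshape(dim, dim)
    V = np.zeros((k, k))                      # one group per grid position
    for i in range(dim):
        for j in range(dim):
            rows = [(i + di) % dim for di in range(group_size)]
            cols = [(j + dj) % dim for dj in range(group_size)]
            V[i * dim + j, grid[np.ix_(rows, cols)].ravel()] = 1
    return V

def topographic_penalty(V, s, eps):
    """Sum over all groups (and examples) of sqrt(group sum of squares + eps)."""
    return np.sum(np.sqrt(V @ (s ** 2) + eps))

V = grouping_matrix(8)                        # 64 features on an 8x8 grid
s = np.random.randn(64, 10)                   # features for a mini-batch of 10 examples
print(topographic_penalty(V, s, 1e-2))
</pre>
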

This objective function can be optimized using the iterated method described in the earlier section. Topographic sparse coding will learn features similar to those learned by sparse coding, except that the features will now be "ordered" in some way.

== Sparse coding in practice ==

As suggested in the earlier sections, while the theory behind sparse coding is quite simple, writing a good implementation that actually works and converges reasonably quickly to good optima requires a bit of finesse.

Recall the simple iterative algorithm proposed earlier:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>

It turns out that running this algorithm out of the box will not produce very good results, if any results are produced at all. There are two main tricks to achieve faster and better convergence:
<ol>
<li>Batching examples into "mini-batches"
<li>Good initialization of <math>s</math>
</ol>

=== Batching examples into mini-batches ===

If you try running the simple iterative algorithm on a large dataset of say 10 000 patches at one go, you will find that each iteration takes a long time, and the algorithm may hence take a long time to converge. To increase the rate of convergence, you can instead run the algorithm on mini-batches. To do this, instead of running the algorithm on all 10 000 patches, in each iteration select a mini-batch - a (different) random subset of say 2000 patches from the 10 000 patches - and run the algorithm on that mini-batch for that iteration. This accomplishes two things - firstly, it speeds up each iteration, since now each iteration is operating on 2000 rather than 10 000 patches; secondly, and more importantly, it increases the rate of convergence (TODO: explain why).
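
For example, selecting a mini-batch might look like the following sketch (the array names and sizes are illustrative assumptions, not from the original text):

<pre>
import numpy as np

rng = np.random.default_rng()
patches = np.random.randn(192, 10000)     # dataset of patches stored as columns
batch_size = 2000

# a (different) random subset of the patches for this iteration
idx = rng.choice(patches.shape[1], size=batch_size, replace=False)
mini_batch = patches[:, idx]
</pre>
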

=== Good initialization of <math>s</math> ===

Another important trick in obtaining faster and better convergence is good initialization of the feature matrix <math>s</math> before using gradient descent (or other methods) to optimize the objective function for <math>s</math> given <math>A</math>. In practice, initializing <math>s</math> randomly at each iteration can result in poor convergence unless a good optimum is found for <math>s</math> before moving on to optimize for <math>A</math>. A better way to initialize <math>s</math> is the following:
<ol>
<li>Set <math>s \leftarrow A^Tx</math> (where <math>x</math> is the matrix of patches in the mini-batch)
<li>For each feature in <math>s</math> (i.e. each row of <math>s</math>), divide the feature by the norm of the corresponding basis vector in <math>A</math>. That is, if <math>s_{r, c}</math> is the <math>r</math>th feature for the <math>c</math>th example, and <math>A_r</math> is the <math>r</math>th basis vector (column) of <math>A</math>, then set <math>s_{r, c} \leftarrow \frac{ s_{r, c} } { \lVert A_r \rVert }.</math>
</ol>

Very roughly and informally speaking, this initialization helps because the first step is an attempt to find a good <math>s</math> such that <math>As \approx x</math>, and the second step "normalizes" <math>s</math> in an attempt to keep the sparsity penalty small. It turns out that initializing <math>s</math> using only one but not both steps results in poor performance in practice. (TODO: a better explanation for why this initialization helps?)
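
In code, this initialization is only a couple of lines (a sketch under the shape conventions used above, where <math>A</math> is <math>n \times k</math> with basis vectors as columns and the mini-batch <math>x</math> is <math>n \times m</math>):

<pre>
import numpy as np

def initialize_features(A, x):
    """Initialize s for the mini-batch x given the current basis A."""
    s = A.T @ x                              # step 1: s <- A^T x
    norms = np.linalg.norm(A, axis=0)        # ||A_r|| for each basis vector (column) of A
    return s / norms[:, np.newaxis]          # step 2: divide each feature (row) by ||A_r||
</pre>
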

=== The practical algorithm ===

With the above two tricks, the algorithm for sparse coding then becomes:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Select a random mini-batch of 2000 patches
    <li>Initialize <math>s</math> as described above
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>

With this method, you should be able to reach a good local optimum relatively quickly.
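
Putting the pieces together, a minimal end-to-end sketch of this practical algorithm might look as follows. This is an illustrative implementation with assumed sizes and hyperparameters (not reference code from the original text): it uses the smoothed objective's gradient for the <math>s</math>-step with plain gradient descent (an off-the-shelf optimizer such as L-BFGS would converge faster), and the closed-form solution derived earlier for the <math>A</math>-step.

<pre>
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not values from the text)
n, k, m_total, m_batch = 64, 121, 10000, 2000
lam, gamma, eps = 5e-3, 1e-2, 1e-2
patches = rng.normal(size=(n, m_total))          # stand-in for a dataset of image patches

A = rng.normal(size=(n, k))                      # 1. initialize A randomly

def grad_s(A, s, x):
    # gradient of ||As - x||^2 + lam * sum(sqrt(s^2 + eps)) with respect to s
    return 2 * A.T @ (A @ s - x) + lam * s / np.sqrt(s ** 2 + eps)

for it in range(200):                            # 2. repeat until convergence
    # 2a. select a random mini-batch of patches
    x = patches[:, rng.choice(m_total, size=m_batch, replace=False)]

    # 2b. initialize s as described above
    s = A.T @ x
    s /= np.linalg.norm(A, axis=0)[:, np.newaxis]

    # 2c. (approximately) find the s that minimizes J(A, s) for the current A
    for _ in range(50):
        s -= 1e-3 * grad_s(A, s, x)              # fixed step size, for simplicity only

    # 2d. solve for the A that minimizes J(A, s) for this s (closed form)
    A = x @ s.T @ np.linalg.inv(s @ s.T + gamma * np.eye(k))
</pre>
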
