Sparse Coding: Autoencoder Interpretation

<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \lVert s \rVert_1 + \gamma \lVert A \rVert_2^2
</math>
(note that the third term, <math>\lVert A \rVert_2^2</math>, is simply the sum of squares of the entries of <math>A</math>, or <math>\sum_r{\sum_c{A_{rc}^2}}</math>)
This objective function presents one last problem - the L1 norm is not differentiable at 0, and hence poses a problem for gradient-based methods. While the problem can be solved using methods that do not rely on gradients, we will instead "smooth out" the L1 norm using an approximation that allows us to use gradient descent. To "smooth out" the L1 norm, we use <math>\sqrt{x^2 + \epsilon}</math> in place of <math>\left| x \right|</math>, where <math>\epsilon</math> is a "smoothing parameter" which can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when <math>\epsilon</math> is large compared to <math>x^2</math>, <math>x^2 + \epsilon</math> is dominated by <math>\epsilon</math>, and taking the square root yields approximately <math>\sqrt{\epsilon}</math>). This "smoothing" will come in handy later when considering topographic sparse coding below.
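
For gradient-based optimization, the payoff of this smoothing is that the penalty now has a well-defined derivative everywhere (a standard calculus fact, noted here for reference):

<math>
\frac{d}{dx} \sqrt{x^2 + \epsilon} = \frac{x}{\sqrt{x^2 + \epsilon}},
</math>

which is <math>0</math> at <math>x = 0</math> rather than being undefined there.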
Our final objective function is hence:
<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sqrt{s^2 + \epsilon} + \gamma \lVert A \rVert_2^2
</math>
(where <math>\sqrt{s^2 + \epsilon}</math> is shorthand for <math>\sum_k{\sqrt{s_k^2 + \epsilon}}</math>)
This objective function can then be optimized iteratively, using the following procedure:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>
 
Observe that with our modified objective function, the objective given <math>s</math>, that is <math>J(A; s) = \lVert As - x \rVert_2^2 + \gamma \lVert A \rVert_2^2</math> (the L1 term in <math>s</math> can be omitted since it is not a function of <math>A</math>), is simply quadratic in <math>A</math>, and hence has an easily derivable analytic solution in <math>A</math>. A quick way to derive this solution would be to use matrix calculus - some pages about matrix calculus can be found in the [[Useful Links | useful links]] section. Unfortunately, the objective function given <math>A</math> does not have a similarly nice analytic solution, so that minimization step will have to be carried out using gradient descent or similar optimization methods.
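
As a sketch of that derivation (treating <math>x</math> and <math>s</math> as matrices whose columns correspond to examples), setting the gradient of <math>J(A; s)</math> with respect to <math>A</math> to zero gives

<math>
2(As - x)s^T + 2\gamma A = 0 \quad \Rightarrow \quad A = x s^T \left( s s^T + \gamma I \right)^{-1}
</math>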
 
In theory, optimizing for this objective function using the iterative method as above should (eventually) yield features (the basis vectors of <math>A</math>) similar to those learned using the sparse autoencoder. However, in practice, there are quite a few tricks required for better convergence of the algorithm, and these tricks are described in greater detail in the later section on [[ Sparse Coding: Autoencoder Interpretation#Sparse coding in practice | sparse coding in practice]]. Deriving the gradients for the objective function may be slightly tricky as well, and matrix calculus or [[Deriving gradients using the backpropagation idea | the backpropagation intuition]] can be helpful.
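
To make the pieces concrete, here is a minimal NumPy sketch of the smoothed objective, its gradient with respect to <math>s</math>, and the closed-form update for <math>A</math> discussed above. The function names and the convention that patches are stored as columns (<math>x</math> is <math>n \times m</math>, <math>A</math> is <math>n \times k</math>, <math>s</math> is <math>k \times m</math>) are assumptions made for this sketch, not code from the exercise.

<pre>
import numpy as np

def objective(A, s, x, lam, gamma, eps):
    # J(A, s) = ||As - x||^2 + lambda * sum(sqrt(s^2 + eps)) + gamma * ||A||^2
    residual = A @ s - x
    return (np.sum(residual ** 2)
            + lam * np.sum(np.sqrt(s ** 2 + eps))
            + gamma * np.sum(A ** 2))

def grad_s(A, s, x, lam, eps):
    # Gradient of J with respect to s: reconstruction term plus smoothed L1 term.
    return 2 * A.T @ (A @ s - x) + lam * s / np.sqrt(s ** 2 + eps)

def solve_A(s, x, gamma):
    # Closed-form solution for A given s: A = x s^T (s s^T + gamma I)^{-1}.
    k = s.shape[0]
    return x @ s.T @ np.linalg.inv(s @ s.T + gamma * np.eye(k))
</pre>

The minimization step for <math>s</math> can then be carried out by passing <code>objective</code> and <code>grad_s</code> to any gradient-based optimizer, while the step for <math>A</math> uses the closed-form update.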
== Topographic sparse coding ==
Inspired by this example, we would like to learn features which are similarly "topographically ordered". What does this imply for our learned features? Intuitively, if "adjacent" features are "similar", we would expect that if one feature is activated, its neighbors will also be activated to a lesser extent.
Concretely, suppose we (arbitrarily) organized our features into a square matrix. We would then like adjacent features in the matrix to be similar. The way this is accomplished is to group these adjacent features together in the smoothed L1 penalty, so that instead of, say, <math>\sqrt{s_{1,1}^2 + \epsilon}</math>, we use, say, <math>\sqrt{s_{1,1}^2 + s_{1,2}^2 + s_{1,3}^2 + s_{2,1}^2 + s_{2,2}^2 + s_{2,3}^2 + s_{3,1}^2 + s_{3,2}^2 + s_{3,3}^2 + \epsilon}</math> if we group in 3x3 regions. The grouping is usually overlapping, so that the 3x3 region starting at the 1st row and 1st column is one group, the 3x3 region starting at the 1st row and 2nd column is another group, and so on. Further, the grouping is also usually done wrapping around, as if the matrix were a torus, so that every feature is counted an equal number of times.
Hence, in place of the smoothed L1 penalty, we use the sum of smoothed L1 penalties over all the groups, so our new objective function is:
<math>
J(A, s) = \lVert As - x \rVert_2^2 + \lambda \sum_{\text{all groups } g}{\sqrt{ \left( \sum_{\text{all } s \in g}{s^2} \right) + \epsilon}} + \gamma \lVert A \rVert_2^2
</math>
This objective function can be optimized using the iterative method described in the earlier section. Topographic sparse coding will learn features similar to those learned by sparse coding, except that the features will now be "ordered" in some way.
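
As an illustration of the grouping described above, the following sketch builds a "grouping matrix" <math>V</math> for overlapping 3x3 regions with torus-style wrap-around and evaluates the summed smoothed penalty. The matrix <math>V</math> and the helper names are assumptions made for this sketch; the exercise may organize the computation differently.

<pre>
import numpy as np

def grouping_matrix(d, group_size=3):
    # Features are arranged on a d x d grid (k = d*d features).
    # V[g, f] = 1 if feature f belongs to group g; groups are overlapping
    # 3x3 regions that wrap around the grid as if it were a torus.
    k = d * d
    V = np.zeros((k, k))
    for i in range(d):                # row at which the group starts
        for j in range(d):            # column at which the group starts
            g = i * d + j
            for di in range(group_size):
                for dj in range(group_size):
                    f = ((i + di) % d) * d + ((j + dj) % d)
                    V[g, f] = 1
    return V

def topographic_penalty(V, s, eps):
    # Sum over all groups g of sqrt(sum of s^2 over features in g, plus eps),
    # summed over all examples (columns of s).
    return np.sum(np.sqrt(V @ (s ** 2) + eps))
</pre>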
== Sparse coding in practice ==
As suggested in the earlier sections, while the theory behind sparse coding is quite simple, writing a good implementation that actually works and converges reasonably quickly to a good optimum requires a bit of finesse.
Recall the simple iterative algorithm proposed earlier:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>
It turns out that running this algorithm out of the box will not produce very good results, if any results are produced at all. There are two main tricks to achieve faster and better convergence:
<ol>
<li>Batching examples into "mini-batches"
<li>Good initialization of <math>s</math>
</ol>
=== Batching examples into mini-batches ===
If you try running the simple iterative algorithm on a large dataset of say 10 000 patches in one go, you will find that each iteration takes a long time, and the algorithm may hence take a long time to converge. To increase the rate of convergence, you can run the algorithm on mini-batches instead. To do this, instead of running the algorithm on all 10 000 patches, in each iteration, select a mini-batch - a (different) random subset of say 2000 patches from the 10 000 patches - and run the algorithm on that mini-batch for the iteration. This accomplishes two things - firstly, it speeds up each iteration, since now each iteration is operating on 2000 rather than 10 000 patches; secondly, and more importantly, it increases the rate of convergence (TODO: explain why).
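
For instance, assuming the full set of patches is stored as columns of a matrix <code>X</code> (a name and layout assumed just for this sketch), a mini-batch for one iteration might be drawn as:

<pre>
import numpy as np

X = np.random.randn(64, 10000)   # stand-in for the real n x 10 000 patch matrix

# Pick a different random subset of 2000 of the 10 000 patches each iteration.
batch = X[:, np.random.choice(X.shape[1], 2000, replace=False)]
</pre>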
=== Good initialization of <math>s</math> ===
Another important trick in obtaining faster and better convergence is good initialization of the feature matrix <math>s</math> before using gradient descent (or other methods) to optimize for the objective function for <math>s</math> given <math>A</math>. In practice, initializing <math>s</math> randomly at each iteration can result in poor convergence unless a good optimum is found for <math>s</math> before moving on to optimize for <math>A</math>. A better way to initialize <math>s</math> is the following two-step procedure (sketched in code after the list):
<ol>
<li>Set <math>s \leftarrow A^Tx</math> (where <math>x</math> is the matrix of patches in the mini-batch)
<li>For each feature in <math>s</math> (i.e. each row of <math>s</math>), divide the feature by the norm of the corresponding basis vector in <math>A</math>. That is, if <math>s_{r, c}</math> is the <math>r</math>th feature for the <math>c</math>th example, and <math>A_r</math> is the <math>r</math>th basis vector (column) of <math>A</math>, then set <math>s_{r, c} \leftarrow \frac{ s_{r, c} } { \lVert A_r \rVert }.</math>
</ol>
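
In code, the two steps above might look as follows (a sketch under the same column-layout assumptions as before, not the exercise's reference implementation):

<pre>
import numpy as np

def init_s(A, x):
    # Step 1: s <- A^T x.
    s = A.T @ x
    # Step 2: divide the r-th feature (row of s) by the norm of the
    # r-th basis vector (column of A).
    return s / np.linalg.norm(A, axis=0)[:, np.newaxis]
</pre>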
Very roughly and informally speaking, this initialization helps because the first step is an attempt to find a good <math>s</math> such that <math>As \approx x</math>, and the second step "normalizes" <math>s</math> in an attempt to keep the sparsity penalty small. It turns out that initializing <math>s</math> using only one but not both steps results in poor performance in practice. (TODO: a better explanation of why this initialization helps?)
=== The practical algorithm ===
With the above two tricks, the algorithm for sparse coding then becomes:
<ol>
<li>Initialize <math>A</math> randomly
<li>Repeat until convergence
  <ol>
    <li>Select a random mini-batch of 2000 patches
    <li>Initialize <math>s</math> as described above
    <li>Find the <math>s</math> that minimizes <math>J(A, s)</math> for the <math>A</math> found in the previous step
    <li>Solve for the <math>A</math> that minimizes <math>J(A, s)</math> for the <math>s</math> found in the previous step
  </ol>
</ol>
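
Putting the pieces together, one possible shape of the outer loop is sketched below, reusing the illustrative helpers from earlier (<code>grad_s</code>, <code>solve_A</code>, <code>init_s</code>); the batch size, step size, and iteration counts are placeholder values, not recommended settings.

<pre>
import numpy as np

# X: n x m matrix of all patches (columns are patches); stand-in data here.
X = np.random.randn(64, 10000)
n, k = X.shape[0], 121
A = np.random.randn(n, k)

lam, gamma, eps = 1.0, 1e-2, 1e-2      # placeholder hyperparameters

for it in range(200):                  # "repeat until convergence"
    idx = np.random.choice(X.shape[1], 2000, replace=False)
    x = X[:, idx]                      # random mini-batch of 2000 patches
    s = init_s(A, x)                   # good initialization of s
    for _ in range(20):                # a few gradient steps on s given A
        s -= 1e-3 * grad_s(A, s, x, lam, eps)
    A = solve_A(s, x, gamma)           # closed-form update of A given s
</pre>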
With this method, you should be able to reach a good local optimum relatively quickly.
