Deriving gradients using the backpropagation idea

== Example 2: Smoothed topographic L1 sparsity penalty in sparse coding ==
Recall the smoothed topographic L1 sparsity penalty on <math>s</math> in sparse coding:
:<math>\sum \sqrt{V s s^T + \epsilon}</math>
where the sum runs over all entries of the matrix <math>V s s^T + \epsilon</math>, <math>V</math> is the grouping matrix, <math>s</math> is the feature matrix, and <math>\epsilon</math> is a small constant that keeps the square root differentiable at zero.
We would like to find <math>\nabla_s \sum \sqrt{V s s^T + \epsilon}</math>. As above, let us view this term as an instantiation of a neural network:
[[File:Backpropagation Method Example 2.png | 600px]]
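
Working backwards through this network with the chain rule, one plausible closed form for the gradient is the following (a sketch, not this page's own step-by-step derivation; the square root and the fraction are elementwise, and the matrix <math>W</math> is introduced here only as shorthand):

:<math>W = \frac{1}{2\sqrt{V s s^T + \epsilon}}, \qquad \nabla_s \sum \sqrt{V s s^T + \epsilon} = \left(V^T W + W^T V\right) s</math>

The two terms arise because <math>s</math> enters the expression twice, once as <math>s</math> and once as <math>s^T</math>; each occurrence contributes one application of the chain rule.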
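To sanity-check a derivation like this, it helps to compare it against a numerical gradient. Below is a minimal NumPy sketch (the function names, dimensions, and toy data are all hypothetical, not from the original page) that implements the penalty and the gradient above and verifies them with central differences:

<pre>
import numpy as np

def topographic_l1_penalty(s, V, eps=1e-2):
    # F(s) = sum over all entries of sqrt(V s s^T + eps)
    return np.sum(np.sqrt(V @ s @ s.T + eps))

def topographic_l1_grad(s, V, eps=1e-2):
    # Backpropagate through the three "layers":
    #   elementwise sqrt:  W = dF/dA      with A = V s s^T
    #   multiply by V:     dF/dB = V^T W  with B = s s^T
    #   B = s s^T:         dF/ds = (V^T W + W^T V) s
    W = 0.5 / np.sqrt(V @ s @ s.T + eps)
    return (V.T @ W + W.T @ V) @ s

# Toy data (hypothetical sizes): k features, m examples, g groups.
# Nonnegative s and V keep the argument of the square root positive.
rng = np.random.default_rng(0)
k, m, g = 4, 5, 3
s = rng.random((k, m)) + 0.1
V = rng.random((g, k))

# Central-difference numerical gradient, entry by entry.
num = np.zeros_like(s)
h = 1e-6
for i in range(k):
    for j in range(m):
        e = np.zeros_like(s)
        e[i, j] = h
        num[i, j] = (topographic_l1_penalty(s + e, V)
                     - topographic_l1_penalty(s - e, V)) / (2 * h)

print(np.allclose(num, topographic_l1_grad(s, V), atol=1e-5))  # expect True
</pre>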
