Fine-tuning Stacked AEs

From Ufldl

=== Introduction ===
Fine-tuning is a strategy commonly used to improve the performance of a stacked autoencoder. It treats all layers of the stacked autoencoder as a single model, so that in each iteration of training, all of the weights in the network can be improved together.
=== Strategy ===
Conceptually, fine-tuning is quite simple. To treat all layers of the stacked autoencoder as a single model, the gradient with respect to every weight in the network is computed at each step using backpropagation, as discussed in the sparse autoencoder section.
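The idea above can be sketched in code. This is a minimal, hypothetical NumPy illustration, not the tutorial's own implementation: a two-layer stacked autoencoder topped by a softmax classifier is treated as one model, and a single fine-tuning step backpropagates the classification error through every layer and updates all weights together. The layer sizes, variable names (`W`, `b`, `Ws`), and random initialization (standing in for greedy layer-wise pretraining) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: input 8 -> hidden 5 -> hidden 3, then 4 softmax classes.
sizes = [8, 5, 3]
n_classes = 4

# In practice these weights come from greedy layer-wise pretraining;
# random initialization here is a stand-in for that step.
W = [rng.normal(scale=0.1, size=(sizes[i + 1], sizes[i])) for i in range(2)]
b = [np.zeros((sizes[i + 1], 1)) for i in range(2)]
Ws = rng.normal(scale=0.1, size=(n_classes, sizes[-1]))  # softmax weights

def forward(x):
    """Forward pass through both encoder layers and the softmax classifier."""
    a = [x]
    for Wi, bi in zip(W, b):
        a.append(sigmoid(Wi @ a[-1] + bi))
    logits = Ws @ a[-1]
    p = np.exp(logits - logits.max(axis=0))   # stable softmax
    p /= p.sum(axis=0)
    return a, p

def finetune_step(x, y_onehot, lr=0.1):
    """One backpropagation step over the whole stack: the fine-tuning update."""
    global Ws
    m = x.shape[1]
    a, p = forward(x)
    delta = p - y_onehot                      # softmax/cross-entropy gradient
    grad_Ws = delta @ a[-1].T / m
    # Propagate the error down through every autoencoder layer.
    delta = (Ws.T @ delta) * a[-1] * (1 - a[-1])
    for i in reversed(range(2)):
        grad_W = delta @ a[i].T / m
        grad_b = delta.mean(axis=1, keepdims=True)
        if i > 0:  # compute the next delta before overwriting W[i]
            delta = (W[i].T @ delta) * a[i] * (1 - a[i])
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b
    Ws -= lr * grad_Ws

# Usage: the training loss should decrease over a few fine-tuning steps.
x = rng.normal(size=(8, 32))
labels = rng.integers(0, n_classes, size=32)
y = np.eye(n_classes)[:, labels]

def loss():
    _, p = forward(x)
    return -np.log(p[labels, range(32)] + 1e-12).mean()

before = loss()
for _ in range(50):
    finetune_step(x, y)
after = loss()
```

The key point the sketch shows is that, unlike greedy layer-wise training, the error signal from the output layer flows through `W[1]` down to `W[0]`, so every layer's weights are adjusted with respect to the final objective.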

Revision as of 21:45, 21 April 2011
