Fine-tuning Stacked AEs

=== Strategy ===
Conceptually, fine-tuning is quite simple: we view all layers of the stacked autoencoder as a single model, and luckily we already have all the tools needed to train it. To compute the gradients for every layer of the stacked autoencoder on each iteration, we use the [[Backpropagation Algorithm]], as discussed in the sparse autoencoder section. Because backpropagation extends to an arbitrary number of layers, it can be applied to a stacked autoencoder of arbitrary depth.
As a practical note, stacked autoencoders rarely go deeper than five layers.
