Fine-tuning Stacked AEs

=== Introduction ===
Fine tuning is a strategy commonly used in deep learning, and it can greatly improve the performance of a stacked autoencoder. From a high-level perspective, fine tuning treats all layers of a stacked autoencoder as a single model, so that each training iteration improves all of the weights in the stacked autoencoder at once.
=== Strategy ===
Conceptually, fine tuning is quite simple. To treat all layers of a stacked autoencoder as a single model, the gradients for every layer's weights are computed in a single pass of the [[Backpropagation Algorithm]], as discussed in the sparse autoencoder section, and all of the weights are then updated together.
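
To make the single-model view concrete, the following is a minimal sketch of one fine-tuning step, written in Python/NumPy rather than the MATLAB used by the UFLDL exercises. It assumes two pre-trained sigmoid encoder layers and a softmax output layer; all variable names, shapes, and the helper <code>fine_tune_step</code> are illustrative and are not part of the tutorial's starter code.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune_step(X, y, W1, b1, W2, b2, Wout, bout, alpha=0.1):
    """One gradient-descent step that updates ALL layers jointly.
    X: (n_features, m) inputs; y: (m,) integer class labels."""
    m = X.shape[1]

    # Forward pass through the whole stack, treated as one network.
    a1 = sigmoid(W1 @ X + b1)           # first hidden layer activations
    a2 = sigmoid(W2 @ a1 + b2)          # second hidden layer activations
    scores = Wout @ a2 + bout
    scores = scores - scores.max(axis=0, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=0, keepdims=True)            # softmax outputs

    # Backward pass: the output error is propagated down through every
    # encoder layer, so the pre-trained weights receive gradients too.
    Y = np.zeros_like(probs)
    Y[y, np.arange(m)] = 1.0
    d_out = (probs - Y) / m                       # softmax + cross-entropy gradient
    d2 = (Wout.T @ d_out) * a2 * (1 - a2)         # sigmoid derivative at layer 2
    d1 = (W2.T @ d2) * a1 * (1 - a1)              # sigmoid derivative at layer 1

    # Joint update of every parameter in the stack (in place).
    Wout -= alpha * (d_out @ a2.T)
    bout -= alpha * d_out.sum(axis=1, keepdims=True)
    W2   -= alpha * (d2 @ a1.T)
    b2   -= alpha * d2.sum(axis=1, keepdims=True)
    W1   -= alpha * (d1 @ X.T)
    b1   -= alpha * d1.sum(axis=1, keepdims=True)
    return W1, b1, W2, b2, Wout, bout

# Illustrative usage with random data and small randomly initialized layers:
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))          # 10 examples, 64 input features
y = rng.integers(0, 5, size=10)            # 5 classes
W1, b1 = 0.01 * rng.standard_normal((32, 64)), np.zeros((32, 1))
W2, b2 = 0.01 * rng.standard_normal((16, 32)), np.zeros((16, 1))
Wout, bout = 0.01 * rng.standard_normal((5, 16)), np.zeros((5, 1))
fine_tune_step(X, y, W1, b1, W2, b2, Wout, bout)
</pre>

The key point is that the pre-trained weights <code>W1</code> and <code>W2</code> receive gradient updates alongside the output layer; this joint update is what distinguishes fine tuning from greedy layer-wise pre-training, where each layer's weights are frozen once that layer has been trained.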
