Fine-tuning Stacked AEs
=== Introduction ===
Fine-tuning is a strategy that is commonly used in deep learning. As such, it can also be used to greatly improve the performance of a stacked autoencoder. From a high-level perspective, fine-tuning treats all layers of a stacked autoencoder as a single model, so that in one iteration, we improve upon all the weights in the stacked autoencoder.
=== Strategy ===
Conceptually, fine-tuning is quite simple. To treat all layers of a stacked autoencoder as a single model, the gradients at each iteration are computed over the entire network using the [[Backpropagation Algorithm]], as discussed in the sparse autoencoder section.
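The sketch below illustrates this idea with a minimal NumPy implementation rather than the tutorial's own code: two sigmoid feature layers (which, in practice, would be initialized by greedy layer-wise pretraining) topped by a softmax classifier, with each gradient step backpropagated through the whole stack so that every weight matrix is updated jointly. The layer sizes, learning rate, and random data are illustrative assumptions, not values from the tutorial.

<pre>
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions (assumed for illustration): 64-dim inputs, two hidden layers, 10 classes.
n_in, n_h1, n_h2, n_classes = 64, 32, 16, 10
X = rng.standard_normal((100, n_in))        # 100 synthetic examples
y = rng.integers(0, n_classes, size=100)    # synthetic labels

# These weights would normally come from layer-wise pretraining of the
# stacked autoencoder; here they are small random values for brevity.
W1 = 0.1 * rng.standard_normal((n_in, n_h1)); b1 = np.zeros(n_h1)
W2 = 0.1 * rng.standard_normal((n_h1, n_h2)); b2 = np.zeros(n_h2)
Ws = 0.1 * rng.standard_normal((n_h2, n_classes)); bs = np.zeros(n_classes)

alpha = 0.1  # learning rate (assumed)
for it in range(200):
    # Forward pass through the whole stack, treated as one model.
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)
    scores = a2 @ Ws + bs
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

    # Backpropagate the softmax cross-entropy error through every layer.
    delta_s = probs.copy()
    delta_s[np.arange(len(y)), y] -= 1.0
    delta_s /= len(y)
    dWs, dbs = a2.T @ delta_s, delta_s.sum(axis=0)

    delta2 = (delta_s @ Ws.T) * a2 * (1 - a2)
    dW2, db2 = a1.T @ delta2, delta2.sum(axis=0)

    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)
    dW1, db1 = X.T @ delta1, delta1.sum(axis=0)

    # One gradient step updates ALL weights in the stacked autoencoder.
    Ws -= alpha * dWs; bs -= alpha * dbs
    W2 -= alpha * dW2; b2 -= alpha * db2
    W1 -= alpha * dW1; b1 -= alpha * db1
</pre>

Note that, unlike the greedy pretraining phase where each layer is trained in isolation, every iteration of this loop adjusts W1, W2, and the classifier weights together, which is what distinguishes fine-tuning from layer-wise training.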