Fine-tuning Stacked AEs
From Ufldl
=== Introduction ===

Fine-tuning is a strategy commonly used in deep learning, and it can greatly improve the performance of a stacked autoencoder. From a high-level perspective, fine-tuning treats all layers of a stacked autoencoder as a single model, so that each iteration improves all of the weights in the stacked autoencoder simultaneously.

=== Strategy ===

Fortunately, we already have all the tools necessary to implement fine-tuning for stacked autoencoders. To compute the gradients for all layers of the stacked autoencoder in each iteration, we use the [[Backpropagation Algorithm]], as discussed in the sparse autoencoder section. Because backpropagation extends to an arbitrary number of layers, it can be applied to a stacked autoencoder of arbitrary depth. (In practice, most stacked autoencoders do not go past five layers.)
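The strategy above can be sketched in code. The following is an illustrative NumPy sketch, not the tutorial's own implementation: it assumes sigmoid hidden layers with a softmax classifier on top (a common stacked-autoencoder setup), and the function name <code>finetune_step</code> and its parameters are hypothetical. One call performs a single fine-tuning iteration: a forward pass through every layer, backpropagation of the softmax error through the whole stack, and a gradient-descent update of all weights at once.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_step(weights, biases, X, Y, lr=0.5):
    """One fine-tuning iteration over the entire stack.

    weights[i], biases[i] parameterize layer i; the last pair is the
    softmax classifier.  X is (examples x features), Y is one-hot labels.
    """
    # Forward pass: sigmoid hidden layers, then a softmax output layer.
    activations = [X]
    a = X
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ W + b)
        activations.append(a)
    scores = a @ weights[-1] + biases[-1]
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)

    # Backward pass: for softmax + cross-entropy, the output-layer
    # error is (probs - Y); backpropagate it through every layer.
    m = X.shape[0]
    delta = (probs - Y) / m
    for i in range(len(weights) - 1, -1, -1):
        gW = activations[i].T @ delta
        gb = delta.sum(axis=0)
        if i > 0:
            # Propagate through the sigmoid: f'(z) = a * (1 - a).
            delta = (delta @ weights[i].T) * activations[i] * (1 - activations[i])
        # Update this layer's parameters (gradients were computed first).
        weights[i] -= lr * gW
        biases[i] -= lr * gb
    return probs
```

Note that in fine-tuning the pretrained autoencoder weights serve only as the initialization; every call to a step like this adjusts all layers jointly, rather than training one layer at a time as in the greedy layer-wise phase.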