Fine-tuning Stacked AEs
=== Step 0: Setup ===

You should build on your files from the previous assignments; you will reuse your sparse autoencoder and softmax regression code.

=== Step 1: Implement the Stacked Autoencoder ===

Using the method described in the previous section, train the stacked autoencoder layer by layer with greedy layer-wise training: train the first autoencoder on the raw input, then train each subsequent autoencoder on the hidden-layer activations produced by the layer below it. (A sketch of this procedure appears below.)

=== Step 2: Train the stacked autoencoder on the data ===

Train your stacked autoencoder on the training data (the same MNIST digit images used for testing in Step 4). Training can take 20-30 minutes per layer, so you may wish to save the learned parameters of each layer to a separate file.

==== Step 2a: Visualize the learned features ====

Visualize the weights learned by the first layer to check that training has produced sensible features. (A visualization sketch appears below.)

=== Step 3: Implement fine-tuning ===

Treat the pretrained stack together with a softmax classifier as a single network, and use backpropagation through the whole network to fine-tune all of the parameters jointly with respect to the classification objective. (A sketch of the cost and gradient computation appears below.)

=== Step 4: Cross-validation ===

Test the fine-tuned network on the MNIST test set and print the classification accuracy; it should be around 97%. (An evaluation sketch appears below.)
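The tutorial's starter code is MATLAB, so treat the following as an illustrative Python/numpy sketch of the greedy layer-wise procedure in Step 1, not as the exercise's actual code. The <code>train_autoencoder</code> helper is a deliberately simplified stand-in (a plain autoencoder with squared-error cost, trained by batch gradient descent) for the sparse autoencoder trainer you wrote in the earlier exercise; in practice you would plug in your own trainer.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(data, hidden_size, epochs=100, lr=0.1):
    """Simplified stand-in for your sparse autoencoder trainer:
    a plain autoencoder with squared-error cost, trained by
    batch gradient descent. Returns the encoder's (W, b)."""
    n, m = data.shape
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.01, (hidden_size, n))
    b1 = np.zeros(hidden_size)
    W2 = rng.normal(0.0, 0.01, (n, hidden_size))
    b2 = np.zeros(n)
    for _ in range(epochs):
        # Forward pass: encode, then reconstruct.
        h = sigmoid(W1 @ data + b1[:, None])
        x_hat = sigmoid(W2 @ h + b2[:, None])
        # Backpropagate the reconstruction error.
        d2 = (x_hat - data) * x_hat * (1.0 - x_hat) / m
        d1 = (W2.T @ d2) * h * (1.0 - h)
        W2 -= lr * (d2 @ h.T);    b2 -= lr * d2.sum(axis=1)
        W1 -= lr * (d1 @ data.T); b1 -= lr * d1.sum(axis=1)
    return W1, b1

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise training: each autoencoder is trained on
    the hidden activations of the layer below it.

    data        : (n_features, n_examples) training inputs
    layer_sizes : hidden-layer sizes, e.g. [200, 200]
    Returns a list of (W, b) encoder pairs, one per hidden layer."""
    stack = []
    activations = data
    for hidden_size in layer_sizes:
        W, b = train_autoencoder(activations, hidden_size)
        stack.append((W, b))
        # The next layer trains on this layer's features.
        activations = sigmoid(W @ activations + b[:, None])
    return stack
</pre>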
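For Step 2a, here is a minimal sketch of one way to inspect the first layer's weights, assuming MNIST-sized inputs so that each row of the first-layer weight matrix can be reshaped into a 28x28 image:

<pre>
import numpy as np
import matplotlib.pyplot as plt

def display_features(W, grid=(10, 10), img_shape=(28, 28)):
    """Tile the first grid[0] * grid[1] rows of W as grayscale images.
    Assumes each row has img_shape[0] * img_shape[1] entries."""
    fig, axes = plt.subplots(*grid, figsize=(8, 8))
    for ax, row in zip(axes.ravel(), W):
        ax.imshow(row.reshape(img_shape), cmap="gray")
        ax.axis("off")
    plt.show()

# Usage after pretraining: display_features(stack[0][0])
</pre>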
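For Step 3, here is a sketch of the fine-tuning cost and gradient: the pretrained stack plus a softmax output layer is treated as one network, and backpropagation runs through all of it. Weight decay is omitted for brevity, and the names are illustrative rather than the tutorial's.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune_cost_grad(stack, W_soft, data, labels):
    """stack   : list of (W, b) encoder pairs from pretraining
    W_soft  : (n_classes, top_hidden_size) softmax weights
    data    : (n_features, m) inputs; labels: (m,) integer classes
    Returns (cost, stack_grads, W_soft_grad)."""
    m = data.shape[1]

    # Forward pass, keeping every activation for backprop.
    activations = [data]
    for W, b in stack:
        activations.append(sigmoid(W @ activations[-1] + b[:, None]))

    # Softmax output and cross-entropy cost.
    scores = W_soft @ activations[-1]
    scores -= scores.max(axis=0)                 # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum(axis=0)
    cost = -np.log(probs[labels, np.arange(m)]).mean()

    # Backward pass, starting from the softmax layer.
    onehot = np.zeros_like(probs)
    onehot[labels, np.arange(m)] = 1.0
    delta = (probs - onehot) / m
    W_soft_grad = delta @ activations[-1].T

    # Propagate the error down through every autoencoder layer.
    delta = W_soft.T @ delta
    stack_grads = []
    for (W, b), a_in, a_out in zip(reversed(stack),
                                   reversed(activations[:-1]),
                                   reversed(activations[1:])):
        delta = delta * a_out * (1.0 - a_out)    # through the sigmoid
        stack_grads.append((delta @ a_in.T, delta.sum(axis=1)))
        delta = W.T @ delta
    stack_grads.reverse()
    return cost, stack_grads, W_soft_grad
</pre>

These gradients can then be handed to whatever optimizer you used in the earlier exercises (e.g. an L-BFGS routine) to update all of the parameters jointly.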
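Finally, for Step 4, a sketch of the accuracy check: feed the test set forward through the fine-tuned stack, classify each example with the softmax layer, and compare against the true labels.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(stack, W_soft, test_data, test_labels):
    """Classification accuracy of the fine-tuned network."""
    a = test_data
    for W, b in stack:
        a = sigmoid(W @ a + b[:, None])          # forward through stack
    predictions = np.argmax(W_soft @ a, axis=0)  # softmax argmax
    return (predictions == test_labels).mean()

# After fine-tuning, this should be around 0.97 on MNIST.
</pre>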