Fine-tuning Stacked AEs

From Ufldl

=== Step 0: Setup ===
 
You should build on your files from previous assignments.
 
=== Step 1: Implement the Stacked Autoencoder ===
 
Using the method described in the previous section, train the stacked autoencoder greedily, one layer at a time: train each autoencoder on the activations produced by the layers trained before it.
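The greedy layer-wise procedure can be sketched as follows. This is a minimal NumPy illustration, not the exercise's starter code: `train_autoencoder` and `greedy_layerwise` are hypothetical names, and the per-layer training here uses plain squared-error gradient descent without the sparsity penalty from earlier sections.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden_size, lr=0.1, epochs=50, seed=0):
    """Train one sigmoid autoencoder on X (n_samples x n_features)
    with batch gradient descent on squared-error reconstruction loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden_size))
    b1 = np.zeros(hidden_size)
    W2 = rng.normal(0, 0.1, (hidden_size, d))
    b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)         # encoder activations
        Xhat = sigmoid(H @ W2 + b2)      # reconstruction
        # backprop of 0.5*||Xhat - X||^2 through the sigmoids
        d_out = (Xhat - X) * Xhat * (1 - Xhat)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out / n
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / n
        b1 -= lr * d_hid.mean(axis=0)
    return W1, b1                        # keep only the encoder weights

def greedy_layerwise(X, layer_sizes):
    """Train a stack of autoencoders: each new layer is trained on the
    activations produced by the previously trained layers."""
    params, A = [], X
    for h in layer_sizes:
        W, b = train_autoencoder(A, h)
        params.append((W, b))
        A = sigmoid(A @ W + b)           # input for the next layer
    return params, A
```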
 
=== Step 2: Train the stacked autoencoder on the data ===

Train your stacked autoencoder on the MNIST training data. Training can take up to 20-30 minutes per layer, so you may wish to save your outputs to a separate file.
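Since each layer can take a long time to train, it is worth checkpointing the learned parameters after every layer. A sketch using NumPy's `savez`; the filename and parameter shapes here are made up for illustration:

```python
import numpy as np

# Hypothetical trained parameters for a two-layer stack (shapes illustrative)
W1, b1 = np.zeros((784, 200)), np.zeros(200)
W2, b2 = np.zeros((200, 200)), np.zeros(200)

# Save everything in one archive so an interrupted run can resume
np.savez("stacked_ae_params.npz", W1=W1, b1=b1, W2=W2, b2=b2)

# Reload later without retraining
saved = np.load("stacked_ae_params.npz")
print(saved["W1"].shape)  # (784, 200)
```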
 
==== Step 2a: Visualize the data ====
 
Visualize the features learned by each autoencoder layer to check that training succeeded, as in previous exercises.
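One common check, in the spirit of the `display_network` helper from earlier exercises, is to tile each hidden unit's input weights as a patch in a grid image. A hypothetical NumPy sketch (`tile_features` is not a starter-code function):

```python
import numpy as np

def tile_features(W, patch_shape, grid_shape):
    """Arrange each column of W (one hidden unit's input weights) as a
    contrast-normalized patch in a grid image, for visual inspection."""
    ph, pw = patch_shape
    gh, gw = grid_shape
    assert W.shape == (ph * pw, gh * gw)
    img = np.zeros((gh * ph, gw * pw))
    for k in range(gh * gw):
        patch = W[:, k].reshape(ph, pw)
        patch = patch / (np.abs(patch).max() + 1e-8)  # normalize contrast
        r, c = divmod(k, gw)
        img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patch
    return img  # show with e.g. matplotlib's imshow(img, cmap="gray")
```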
 
=== Step 3: Implement fine-tuning ===
 
Implement fine-tuning: treat the stacked autoencoder together with its softmax classifier as a single model, and use backpropagation to adjust all of the weights jointly with respect to the classification objective.
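Fine-tuning can be sketched as ordinary backpropagation through the whole network. The NumPy illustration below (hypothetical function names; a sigmoid encoder stack with a softmax output and cross-entropy loss) updates every parameter jointly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

def fine_tune(X, y, layers, Wsoft, bsoft, lr=1.0, epochs=400):
    """Jointly train every encoder layer plus the softmax classifier
    by backpropagating the cross-entropy loss through the whole stack."""
    n, num_classes = X.shape[0], Wsoft.shape[1]
    Y = np.eye(num_classes)[y]                    # one-hot labels
    for _ in range(epochs):
        # forward pass through every encoder layer, then softmax
        acts = [X]
        for W, b in layers:
            acts.append(sigmoid(acts[-1] @ W + b))
        P = softmax(acts[-1] @ Wsoft + bsoft)
        # backward pass: the classification error reaches every layer
        delta = (P - Y) / n
        gWs, gbs = acts[-1].T @ delta, delta.sum(axis=0)
        delta = (delta @ Wsoft.T) * acts[-1] * (1 - acts[-1])
        Wsoft, bsoft = Wsoft - lr * gWs, bsoft - lr * gbs
        for i in range(len(layers) - 1, -1, -1):
            W, b = layers[i]
            gW, gb = acts[i].T @ delta, delta.sum(axis=0)
            if i > 0:
                delta = (delta @ W.T) * acts[i] * (1 - acts[i])
            layers[i] = (W - lr * gW, b - lr * gb)
    return layers, Wsoft, bsoft
```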
 
=== Step 4: Cross-validation ===
 
Test your fine-tuned network on the MNIST test data and print out the percentage of correctly classified digits; it should be around 97%.
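Classification accuracy is just the fraction of test digits whose predicted label matches the ground truth. For example, with made-up predictions on a ten-digit batch:

```python
import numpy as np

def accuracy(pred, labels):
    """Percentage of correctly classified examples."""
    return 100.0 * np.mean(pred == labels)

# Hypothetical predictions vs. ground truth for ten test digits
pred   = np.array([7, 2, 1, 0, 4, 1, 4, 9, 5, 9])
labels = np.array([7, 2, 1, 0, 4, 1, 4, 8, 5, 9])
print("Accuracy: %.1f%%" % accuracy(pred, labels))  # 90.0% here
```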
 

Revision as of 15:56, 21 April 2011
