Exercise: Implement deep networks for digit classification

From Ufldl

Open <tt>stackedAETrain.m</tt>. In this step, we set the meta-parameters to the same values that were used in the previous exercise, which should produce reasonable results. You may modify the meta-parameters if you wish.
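As a point of reference, the meta-parameters might be set along these lines (a sketch only — the exact values and variable names are assumptions carried over from the earlier sparse autoencoder and softmax exercises, so defer to what your <tt>stackedAETrain.m</tt> actually uses):

```matlab
% Assumed meta-parameter settings, mirroring the earlier exercises:
inputSize     = 28 * 28;   % MNIST images are 28x28 pixels
numClasses    = 10;        % digits 0-9
hiddenSizeL1  = 200;       % hidden units in the first autoencoder layer
hiddenSizeL2  = 200;       % hidden units in the second autoencoder layer
sparsityParam = 0.1;       % desired average activation of hidden units (rho)
lambda        = 3e-3;      % weight decay parameter
beta          = 3;         % weight of the sparsity penalty term
```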
=== Step 1: Train the first autoencoder ===
Train the first autoencoder on the training images to obtain its parameters. This step is identical to the corresponding step in the sparse autoencoder and STL assignments, so if you have implemented your <tt>autoencoderCost.m</tt> correctly, this step should run properly without needing any modifications.
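This training step typically looks something like the following sketch, which assumes the <tt>minFunc</tt> optimizer, an <tt>initializeParameters</tt> helper, and the <tt>sparseAutoencoderCost</tt> function from the earlier exercises are on your path (all names are assumptions from those assignments):

```matlab
% Sketch: train the first sparse autoencoder on the raw pixel data.
% Assumes minFunc/, initializeParameters.m and sparseAutoencoderCost.m
% from the previous exercises.
addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 400;

sae1Theta = initializeParameters(hiddenSizeL1, inputSize);
[sae1OptTheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, ...
    inputSize, hiddenSizeL1, lambda, sparsityParam, beta, trainData), ...
    sae1Theta, options);
```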
=== Step 2: Train the second autoencoder ===

Run the training set through the first autoencoder to obtain the hidden unit activations, then train the second autoencoder on these activations. Since this is just a standard autoencoder applied to new input data, it should run identically to the first.

Note: This step assumes that you have changed the method signature of <tt>sparseAutoencoderCost</tt> from <tt>function [cost, grad] = sparseAutoencoderCost(...)</tt> to <tt>function [cost, grad, activation] = sparseAutoencoderCost(...)</tt> in the [[Exercise:Self-Taught_Learning|previous assignment]].
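Concretely, this step might look like the sketch below. It assumes a <tt>feedForwardAutoencoder</tt> helper in the style of the self-taught learning exercise (an alternative to the modified <tt>sparseAutoencoderCost</tt> signature above) and the variables from Step 1; treat the names as placeholders for whatever your own code uses:

```matlab
% Sketch: compute first-layer features, then train the second autoencoder
% on them. feedForwardAutoencoder and the Step 1 variables are assumed.
sae1Features = feedForwardAutoencoder(sae1OptTheta, hiddenSizeL1, ...
    inputSize, trainData);

sae2Theta = initializeParameters(hiddenSizeL2, hiddenSizeL1);
[sae2OptTheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, ...
    hiddenSizeL1, hiddenSizeL2, lambda, sparsityParam, beta, sae1Features), ...
    sae2Theta, options);
```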
=== Step 3: Implement fine-tuning ===
When adding the weight decay term to the cost, only the weights for the topmost (softmax) layer need to be considered. Doing so does not adversely impact the results, but simplifies the implementation significantly.
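The forward pass and cost of the fine-tuning objective might be sketched as follows, with weight decay applied only to the softmax weights. The <tt>stack{d}.w</tt>/<tt>stack{d}.b</tt> layout, the <tt>sigmoid</tt> helper, and the other names are assumptions based on the conventions of the earlier exercises, not a prescribed implementation:

```matlab
% Sketch of the fine-tuning cost: forward-propagate through both
% autoencoder layers, then the softmax layer. sigmoid is the logistic
% helper from the earlier exercises; groundTruth is numClasses x m.
a1 = sigmoid(bsxfun(@plus, stack{1}.w * data, stack{1}.b));
a2 = sigmoid(bsxfun(@plus, stack{2}.w * a1,   stack{2}.b));

z = softmaxTheta * a2;
z = bsxfun(@minus, z, max(z));               % subtract max for numerical stability
p = bsxfun(@rdivide, exp(z), sum(exp(z)));   % class probabilities

% Weight decay penalizes only the softmax weights, as noted above.
cost = -mean(sum(groundTruth .* log(p))) ...
       + (lambda / 2) * sum(softmaxTheta(:) .^ 2);
```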
=== Step 4: Test the model ===
After completing these steps, running the entire script in <tt>stackedAETrain.m</tt> will perform layer-wise training of the stacked autoencoder, fine-tune the model, and measure its performance on the test set. If you have done all the steps correctly, you should get an accuracy of about X percent.
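Measuring test-set accuracy might look like this sketch, assuming a prediction helper in the style of the exercise code (the function and variable names are assumptions):

```matlab
% Sketch: predict labels with the fine-tuned network and report accuracy.
% stackedAEPredict and the listed variables are assumed from the exercise.
pred = stackedAEPredict(stackedAEOptTheta, inputSize, hiddenSizeL2, ...
    numClasses, netconfig, testData);
acc = mean(testLabels(:) == pred(:));
fprintf('Test accuracy: %0.3f%%\n', acc * 100);
```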

Revision as of 19:08, 28 April 2011
