Exercise:Self-Taught Learning
From Ufldl
===Step 3: Extracting features===

After the sparse autoencoder is trained, you will use it to extract features from the handwritten digit images.

Complete <tt>feedForwardAutoencoder.m</tt> to produce a matrix whose columns correspond to activations of the hidden layer for each example, i.e., the vector <math>a^{(2)}</math> corresponding to activation of layer 2. (Recall that we treat the inputs as layer 1.)

After completing this step, calling <tt>feedForwardAutoencoder.m</tt> should convert the raw image data to hidden unit activations <math>a^{(2)}</math>.
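The forward pass above can be sketched as follows. This is an illustrative NumPy translation of what <tt>feedForwardAutoencoder.m</tt> computes, not the exercise's MATLAB code itself; the names <tt>W1</tt> and <tt>b1</tt> follow the exercise's parameter convention, and a sigmoid activation is assumed.

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-z))

def feedforward_autoencoder(W1, b1, data):
    """Compute the hidden-layer activations a^(2) for each example.

    W1   : (hiddenSize, visibleSize) weight matrix of layer 1
    b1   : (hiddenSize,) bias vector of layer 1
    data : (visibleSize, numExamples) matrix, one example per column

    Returns an (hiddenSize, numExamples) matrix whose columns are
    the activation vectors a^(2).
    """
    z2 = W1 @ data + b1[:, np.newaxis]  # pre-activations z^(2), broadcast bias
    return sigmoid(z2)                  # a^(2) = sigmoid(z^(2))
```

Each column of the returned matrix is the learned feature vector that replaces the corresponding raw-pixel column of the input.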
===Step 4: Training and testing the logistic regression model===

Use your code from the softmax exercise (<tt>softmaxTrain.m</tt>) to train a softmax classifier using the training set features (<tt>trainFeatures</tt>) and labels (<tt>trainLabels</tt>).
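As a rough sketch of what this training step does, here is a minimal softmax trainer in NumPy. It uses plain batch gradient descent purely for illustration (the exercise's <tt>softmaxTrain.m</tt> uses L-BFGS); the hyperparameters <tt>lam</tt>, <tt>lr</tt>, and <tt>iters</tt> are arbitrary choices for this sketch.

```python
import numpy as np

def softmax_train(features, labels, num_classes, lam=1e-4, lr=0.5, iters=200):
    """Train softmax (multinomial logistic regression) weights.

    features : (inputSize, numExamples), one example per column
    labels   : (numExamples,) integer labels in 0..num_classes-1
    lam      : weight-decay (L2) strength
    Returns a (num_classes, inputSize) weight matrix theta.
    """
    n, m = features.shape
    rng = np.random.default_rng(0)
    theta = 0.005 * rng.standard_normal((num_classes, n))
    ground_truth = np.eye(num_classes)[:, labels]   # one-hot, (num_classes, m)
    for _ in range(iters):
        scores = theta @ features
        scores -= scores.max(axis=0)                # subtract max for stability
        probs = np.exp(scores)
        probs /= probs.sum(axis=0)                  # class probabilities per column
        # Gradient of the regularized cross-entropy cost
        grad = -(ground_truth - probs) @ features.T / m + lam * theta
        theta -= lr * grad
    return theta
```

In the exercise itself you simply call your existing <tt>softmaxTrain</tt> with <tt>trainFeatures</tt> and <tt>trainLabels</tt>; the point of the sketch is the cost gradient being minimized.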
===Step 5: Classifying on the test set===

Finally, complete the code to make predictions on the test set (<tt>testFeatures</tt>) and see how your learned features perform! If you've done all the steps correctly, you should get an accuracy of about '''98%'''.
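Prediction amounts to picking, for each test column, the class with the largest score under the trained weights. A hedged NumPy sketch (the helper names <tt>softmax_predict</tt> and <tt>accuracy</tt> are hypothetical, not part of the starter code):

```python
import numpy as np

def softmax_predict(theta, features):
    """Return the most likely class for each column of `features`.

    theta    : (num_classes, inputSize) trained weight matrix
    features : (inputSize, numExamples), one example per column
    """
    return (theta @ features).argmax(axis=0)

def accuracy(pred, labels):
    """Fraction of examples whose predicted class matches the label."""
    return float((pred == labels).mean())
```

Running the analogue of this on <tt>testFeatures</tt> and comparing against <tt>testLabels</tt> yields the accuracy figure quoted above.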
+ | |||
+ | As a comparison, when ''raw pixels'' are used (instead of the learned features), we obtained a test accuracy of only around 96% (for the same train and test sets). | ||
[[Category:Exercises]]
+ | |||
+ | |||
+ | {{STL}} |