Exercise: Implement deep networks for digit classification
===Overview===
In this exercise, you will use a stacked autoencoder for digit classification. This exercise is very similar to the self-taught learning exercise, in which we trained a digit classifier using an autoencoder layer followed by a softmax layer. The only difference in this exercise is that we will be using two autoencoder layers instead of one, and will further fine-tune the two layers.
The code you have already implemented will allow you to stack various layers and perform layer-wise training. However, to perform fine-tuning, you will need to implement backpropagation through both layers. We will see that fine-tuning significantly improves the model's performance.
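Fine-tuning treats the whole stack as a single network and backpropagates the softmax classifier's error through both autoencoder layers. The sketch below illustrates that chain of deltas in Python/NumPy rather than MATLAB, with made-up layer sizes and random parameters standing in for the pretrained stack (all names here are illustrative, not taken from the starter code); a numerical gradient check confirms the backward pass:

```python
import numpy as np

# Illustrative sketch of fine-tuning gradients for a 2-layer stacked
# autoencoder with a softmax output. Sizes and names are made up.
rng = np.random.default_rng(0)
n_in, h1, h2, n_classes, m = 8, 6, 5, 3, 10

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters standing in for the layer-wise pretrained weights.
W1 = rng.normal(scale=0.1, size=(h1, n_in)); b1 = np.zeros((h1, 1))
W2 = rng.normal(scale=0.1, size=(h2, h1));   b2 = np.zeros((h2, 1))
Ws = rng.normal(scale=0.1, size=(n_classes, h2)); bs = np.zeros((n_classes, 1))

X = rng.normal(size=(n_in, m))
y = rng.integers(0, n_classes, size=m)
Y = np.eye(n_classes)[:, y]              # one-hot labels, (n_classes, m)

def cost_and_grads(W1, b1, W2, b2, Ws, bs):
    # Forward pass through both autoencoder layers, then softmax.
    a1 = sigmoid(W1 @ X + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    z = Ws @ a2 + bs
    z -= z.max(axis=0, keepdims=True)    # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    cost = -np.sum(Y * np.log(p)) / m

    # Backward pass: the softmax error flows through BOTH layers.
    d3 = (p - Y) / m                     # softmax delta
    gWs, gbs = d3 @ a2.T, d3.sum(axis=1, keepdims=True)
    d2 = (Ws.T @ d3) * a2 * (1 - a2)     # through layer 2 (sigmoid')
    gW2, gb2 = d2 @ a1.T, d2.sum(axis=1, keepdims=True)
    d1 = (W2.T @ d2) * a1 * (1 - a1)     # through layer 1
    gW1, gb1 = d1 @ X.T, d1.sum(axis=1, keepdims=True)
    return cost, (gW1, gb1, gW2, gb2, gWs, gbs)

# Central-difference gradient check on one entry of W1.
cost, grads = cost_and_grads(W1, b1, W2, b2, Ws, bs)
eps = 1e-5
W1p = W1.copy(); W1p[0, 0] += eps
W1n = W1.copy(); W1n[0, 0] -= eps
num = (cost_and_grads(W1p, b1, W2, b2, Ws, bs)[0]
       - cost_and_grads(W1n, b1, W2, b2, Ws, bs)[0]) / (2 * eps)
print(abs(num - grads[0][0, 0]))
```

The same structure, with the parameters packed into a single vector, is what <tt>stackedAECost.m</tt> asks for; checking your analytic gradients against numerical ones, as above, is the standard way to validate the implementation.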
In the file [http://ufldl.stanford.edu/wiki/resources/stackedae_exercise.zip stackedae_exercise.zip], we have provided some starter code. You will need to complete the code in '''<tt>stackedAECost.m</tt>''', '''<tt>stackedAEPredict.m</tt>''' and '''<tt>stackedAEExercise.m</tt>'''. We have also provided <tt>params2stack.m</tt> and <tt>stack2params.m</tt>, which you might find helpful in constructing deep networks.
=== Dependencies ===