Exercise:Convolution and Pooling

== Convolution and Pooling ==
In this exercise you will use the features you learned on 8x8 patches sampled from images from the STL-10 dataset in [[Exercise:Learning color features with Sparse Autoencoders | the earlier exercise on linear decoders]] to classify images from a reduced STL-10 dataset, applying [[Feature extraction using convolution | convolution]] and [[Pooling | pooling]]. The reduced STL-10 dataset comprises 64x64 images from 4 classes (airplane, car, cat, dog).
In the file <tt>[http://ufldl.stanford.edu/wiki/resources/cnn_exercise.zip cnn_exercise.zip]</tt> we have provided some starter code. You should write your code at the places indicated by "YOUR CODE HERE" in the files.
=== Step 1: Load learned features ===
In this step, you will use the features from [[Exercise:Learning color features with Sparse Autoencoders]]. If you have completed that exercise, you can load the color features that were previously saved. To verify that the features are correct, the visualized features should look like the following:
[[File:CNN_Features_Good.png|300px]]
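For reference, loading and visualizing the saved features might look like the sketch below. The file name <tt>STL10Features.mat</tt>, the layout of <tt>optTheta</tt>, and the sizes are assumptions based on the linear decoder exercise; adjust them to match what you actually saved.

<syntaxhighlight lang="matlab">
% Minimal sketch (assumed file and variable names from the earlier exercise)
load STL10Features.mat;        % assumed to provide optTheta, ZCAWhite, meanPatch

hiddenSize  = 400;             % number of learned features (assumed)
visibleSize = 8 * 8 * 3;       % 8x8 RGB patches

% Unpack the input-to-hidden weights and biases from the parameter vector
W = reshape(optTheta(1 : visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2 * hiddenSize * visibleSize + 1 : 2 * hiddenSize * visibleSize + hiddenSize);

% Visualize the features, folding in the whitening matrix
displayColorNetwork((W * ZCAWhite)');
</syntaxhighlight>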
=== Step 2: Implement and test convolution and pooling ===
In this step, you will implement convolution and pooling, and test them on a small part of the data set to ensure that you have implemented these two functions correctly. In the next step, you will actually convolve and pool the features with the STL-10 images.
==== Step 2a: Implement convolution ====
Implement convolution, as described in [[feature extraction using convolution]], in the function <tt>cnnConvolve</tt> in <tt>cnnConvolve.m</tt>. Implementing convolution is somewhat involved, so we will guide you through the process below.
First, we want to compute <math>\sigma(Wx_{(r,c)} + b)</math> for all ''valid'' <math>(r, c)</math> (''valid'' meaning that the entire 8x8 patch is contained within the image; this is as opposed to a ''full'' convolution, which allows the patch to extend outside the image, with the area outside the image assumed to be 0), where <math>W</math> and <math>b</math> are the learned weights and biases from the input layer to the hidden layer, and <math>x_{(r,c)}</math> is the 8x8 patch with the upper left corner at <math>(r, c)</math>. To accomplish this, one naive method is to loop over all such patches and compute <math>\sigma(Wx_{(r,c)} + b)</math> for each of them; while this is fine in theory, it can be very slow. Hence, we usually use MATLAB's built-in convolution functions, which are well optimized.
Observe that the convolution above can be broken down into the following three small steps. First, compute <math>Wx_{(r,c)}</math> for all <math>(r, c)</math>. Next, add <math>b</math> to all the computed values. Finally, apply the sigmoid function to the resulting values. This doesn't seem to buy you anything, since the first step still requires a loop. However, you can replace the loop in the first step with one of MATLAB's optimized convolution functions, <tt>conv2</tt>, speeding up the process significantly.
However, there are two important points to note in using <tt>conv2</tt>.  
First, <tt>conv2</tt> performs a 2-D convolution, but you have 5 "dimensions" - image number, feature number, row of image, column of image, and (color) channel of image - that you want to convolve over. Because of this, you will have to convolve each feature and image channel separately for each image, using the row and column of the image as the 2 dimensions you convolve over. This means that you will need three outer loops over the image number <tt>imageNum</tt>, feature number <tt>featureNum</tt>, and the channel number of the image <tt>channel</tt>.  Inside the three nested for-loops, you will perform a <tt>conv2</tt> 2-D convolution, using the weight matrix for the <tt>featureNum</tt>-th feature and <tt>channel</tt>-th channel, and the image matrix for the <tt>imageNum</tt>-th image.  
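Concretely, the loop structure might look like the sketch below. This is only an outline: the channel-major layout assumed for <tt>W</tt>, the variable names, and the flipping (explained in the next point) should be matched to the starter code, and the preprocessing correction described later in this section is omitted here.

<syntaxhighlight lang="matlab">
% Sketch of the convolution loops (outline only; preprocessing omitted).
% Assumes images is imageDim x imageDim x 3 x numImages and each row of W
% stores an 8x8x3 feature with the three channels laid out consecutively.
convDim = imageDim - patchDim + 1;
convolvedFeatures = zeros(numFeatures, numImages, convDim, convDim);

for imageNum = 1:numImages
  for featureNum = 1:numFeatures
    convolvedImage = zeros(convDim, convDim);
    for channel = 1:3
      % Extract the patchDim x patchDim filter for this feature/channel
      offset  = (channel - 1) * patchDim * patchDim;
      feature = reshape(W(featureNum, offset + 1 : offset + patchDim * patchDim), ...
                        patchDim, patchDim);
      % Flip the feature matrix, since conv2 flips its kernel by definition
      feature = flipud(fliplr(feature));

      % 'valid' keeps only windows fully contained within the image
      im = images(:, :, channel, imageNum);
      convolvedImage = convolvedImage + conv2(im, feature, 'valid');
    end
    % Bias and sigmoid are applied after the preprocessing correction below
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end
</syntaxhighlight>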
Second, because of the mathematical definition of convolution, the feature matrix must be "flipped" before passing it to <tt>conv2</tt>. The following implementation tip explains the "flipping" of feature matrices when using MATLAB's convolution functions:
Because the mathematical definition of convolution involves flipping the matrix you convolve with (reversing its rows and its columns), MATLAB's convolution functions flip <tt>W</tt> before performing the convolution. For example, if the feature matrix were

<math>
W =
\begin{pmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{pmatrix},
</math>

then <tt>conv2(image, W)</tt> would effectively slide the flipped matrix

<math>
\begin{pmatrix}
9 & 8 & 7 \\
6 & 5 & 4 \\
3 & 2 & 1
\end{pmatrix}
</math>

over the image.
If the original layout of <tt>W</tt> was correct, after flipping, it would be incorrect. For the layout to be correct after flipping, you will have to flip <tt>W</tt> before passing it into <tt>conv2</tt>, so that after MATLAB flips <tt>W</tt> in <tt>conv2</tt>, the layout will be correct. For <tt>conv2</tt>, this means reversing the rows and columns, which can be done with <tt>flipud</tt> and <tt>fliplr</tt>, as shown below:
<syntaxhighlight lang="matlab">
% Flip W for use in conv2
W = flipud(fliplr(W));
</syntaxhighlight>
Next, to each of the <tt>convolvedFeatures</tt>, you should add <tt>b</tt>, the corresponding bias for the <tt>featureNum</tt>-th feature.

There is one additional complication, however. If you had not done any preprocessing of the input patches, you could simply apply the sigmoid function to obtain the convolved features and be done. But because you preprocessed the patches before learning features on them, you must also apply the same preprocessing steps to the convolved patches to get the correct feature activations.
In particular, you did the following to the patches:
<ol>
<li> subtract the mean patch, <tt>meanPatch</tt>, to zero the mean of the patches;
<li> ZCA whiten using the whitening matrix <tt>ZCAWhite</tt>.
</ol>
These same preprocessing steps must also be applied to the input image patches.
Taking the preprocessing steps into account, the feature activations that you should compute are <math>\sigma(W(T(x-\bar{x})) + b)</math>, where <math>T</math> is the whitening matrix and <math>\bar{x}</math> is the mean patch. Expanding this, you obtain <math>\sigma(WTx - WT\bar{x} + b)</math>, which suggests that you should convolve the images with <math>WT</math> rather than <math>W</math> as earlier, and that you should add <math>(b - WT\bar{x})</math>, rather than just <math>b</math>, to <tt>convolvedFeatures</tt>, before finally applying the sigmoid function.
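In code, this amounts to a small change to the sketch above; a possible version, assuming <tt>ZCAWhite</tt> and <tt>meanPatch</tt> were loaded in Step 1:

<syntaxhighlight lang="matlab">
% Fold the preprocessing into the weights and biases (computed once)
WT = W * ZCAWhite;         % convolve the images with WT = W*T instead of W
bT = b - WT * meanPatch;   % add bT = b - W*T*xbar instead of b

% Then, inside the loops, after summing the per-channel convolutions:
convolvedImage = 1 ./ (1 + exp(-(convolvedImage + bT(featureNum))));
convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
</syntaxhighlight>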
==== Step 2c: Pooling ====
Implement [[pooling]] in the function <tt>cnnPool</tt> in <tt>cnnPool.m</tt>. You should implement ''mean'' pooling (i.e., averaging over feature responses) for this part.
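As a rough guide, mean pooling over disjoint regions might look like the sketch below; it assumes <tt>convolvedFeatures</tt> has size <tt>numFeatures x numImages x convDim x convDim</tt> and that <tt>poolDim</tt> divides <tt>convDim</tt> evenly.

<syntaxhighlight lang="matlab">
% Sketch of mean pooling over disjoint poolDim x poolDim regions
numPool = floor(convDim / poolDim);
pooledFeatures = zeros(numFeatures, numImages, numPool, numPool);

for r = 1:numPool
  for c = 1:numPool
    % Average each feature map over the (r,c)-th pooling region
    region = convolvedFeatures(:, :, ...
                 (r-1)*poolDim + 1 : r*poolDim, ...
                 (c-1)*poolDim + 1 : c*poolDim);
    pooledFeatures(:, :, r, c) = mean(mean(region, 4), 3);
  end
end
</syntaxhighlight>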
==== Step 2d: Check your pooling ====
=== Step 3: Convolve and pool with the dataset ===
In this step, you will convolve each of the features you learned with the full 64x64 images from the STL-10 dataset to obtain the convolved features for both the training and test sets. You will then pool the convolved features to obtain the pooled features for both sets. The pooled features for the training set will be used to train your classifier, which you can then test on the test set.
Because the convolved features matrix is very large, the code provided does the convolution and pooling 50 features at a time to avoid running out of memory.
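The batching idea is roughly the following (the starter code already does this for you; the exact <tt>cnnConvolve</tt> signature here is an assumption):

<syntaxhighlight lang="matlab">
% Sketch: convolve and pool stepSize features at a time to limit memory use
stepSize = 50;
for convPart = 1:(hiddenSize / stepSize)
  featureStart = (convPart - 1) * stepSize + 1;
  featureEnd   = convPart * stepSize;

  convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, trainImages, ...
      W(featureStart:featureEnd, :), b(featureStart:featureEnd), ...
      ZCAWhite, meanPatch);
  pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = ...
      cnnPool(poolDim, convolvedFeaturesThis);
  clear convolvedFeaturesThis;   % free memory before the next batch
end
</syntaxhighlight>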
=== Step 4: Use pooled features for classification ===
In this step, you will use the pooled features to train a softmax classifier to map them to the class labels. The code in this section uses <tt>softmaxTrain</tt> from the softmax exercise to train a softmax classifier on the pooled features for 500 iterations, which should take a few minutes.
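The training call is roughly as follows; the reshaping and the <tt>softmaxTrain</tt> signature below are assumptions based on the softmax exercise, so defer to the provided code.

<syntaxhighlight lang="matlab">
% Sketch: flatten the pooled features into one column per example and train
numTrainImages = size(trainImages, 4);
inputSize  = numel(pooledFeaturesTrain) / numTrainImages;
numClasses = 4;        % airplane, car, cat, dog
lambda     = 1e-4;     % weight decay (assumed value)

softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);    % image index last
softmaxX = reshape(softmaxX, inputSize, numTrainImages);

options = struct;
options.maxIter = 500;
softmaxModel = softmaxTrain(inputSize, numClasses, lambda, ...
                            softmaxX, trainLabels, options);
</syntaxhighlight>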
=== Step 5: Test classifier ===
Now that you have a trained softmax classifier, you can see how well it performs on the test set. The pooled features for the test set will be run through the softmax classifier, and the accuracy of its predictions will be computed. You should expect to get an accuracy of around 80%.
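Testing mirrors the training setup; a sketch, assuming <tt>softmaxPredict</tt> from the softmax exercise:

<syntaxhighlight lang="matlab">
% Sketch: run the test set's pooled features through the trained classifier
numTestImages = size(testImages, 4);
softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);

pred = softmaxPredict(softmaxModel, softmaxX);
acc  = mean(testLabels(:) == pred(:));
fprintf('Accuracy: %.2f%%\n', acc * 100);
</syntaxhighlight>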
