# Exercise: Convolution and Pooling

Implement convolution, as described in [[feature extraction using convolution]], in the function cnnConvolve in cnnConvolve.m. Implementing convolution is somewhat involved, so we will guide you through the process below.

First, we want to compute $\sigma(Wx_{(r,c)} + b)$ for all ''valid'' $(r, c)$, where $W$ and $b$ are the learned weights and biases from the input layer to the hidden layer, and $x_{(r,c)}$ is the 8x8 patch with its upper left corner at $(r, c)$. (''Valid'' means that the entire 8x8 patch is contained within the image; this is as opposed to a ''full'' convolution, which allows the patch to extend outside the image, with the area outside the image assumed to be 0.)
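To make the notion of ''valid'' positions concrete, here is a minimal Python sketch (the exercise itself uses MATLAB; the function name and plain-list representation are illustrative, not part of the assignment) that computes $\sigma(Wx_{(r,c)} + b)$ at every valid patch position:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def convolve_valid(image, w, b):
    """Compute sigmoid(sum(w * patch) + b) at every valid patch position.

    image: 2-D list (rows x cols); w: 2-D list (patch_dim x patch_dim);
    b: scalar bias. A position (r, c) is valid when the whole patch
    fits inside the image, so the output is smaller than the input.
    """
    patch_dim = len(w)
    out_rows = len(image) - patch_dim + 1     # number of valid rows
    out_cols = len(image[0]) - patch_dim + 1  # number of valid columns
    out = [[0.0] * out_cols for _ in range(out_rows)]
    for r in range(out_rows):
        for c in range(out_cols):
            acc = b
            for i in range(patch_dim):
                for j in range(patch_dim):
                    acc += w[i][j] * image[r + i][c + j]
            out[r][c] = sigmoid(acc)
    return out
```

For an $m \times n$ image and an 8x8 patch, this yields an $(m - 7) \times (n - 7)$ grid of feature activations.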
To accomplish this, one naive method is to loop over all such patches and compute $\sigma(Wx_{(r,c)} + b)$ for each of them; while this is fine in theory, it can be very slow. Hence, we usually use MATLAB's built-in convolution functions, which are well optimized.

Observe that the convolution above can be broken down into three small steps. First, compute $Wx_{(r,c)}$ for all $(r, c)$. Next, add $b$ to all the computed values. Finally, apply the sigmoid function to the resultant values. This doesn't seem to buy you anything, since the first step still requires a loop. However, you can replace the loop in the first step with one of MATLAB's optimized convolution functions, conv2, speeding up the process significantly.

However, there are two important points to note in using conv2.

First, conv2 performs a 2-D convolution, but you have 5 "dimensions" - image number, feature number, row of image, column of image, and channel of image - that you want to convolve over. Because of this, you will have to convolve each feature and image channel separately for each image, using the row and column of the image as the 2 dimensions you convolve over.
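The overall structure - a 2-D valid convolution inside loops over image, feature, and channel, followed by the bias and the sigmoid - can be sketched in Python/NumPy as follows. This is an illustration only, not the MATLAB solution: the array shapes and names mirror the exercise but are assumptions, and the conv2_valid helper computes a plain cross-correlation, whereas MATLAB's conv2 performs a true convolution (it flips the kernel), so the actual exercise requires flipping the weight matrix before the call.

```python
import numpy as np

def conv2_valid(image, kernel):
    """'Valid' 2-D cross-correlation: the kernel always lies entirely
    inside the image, so the output is smaller than the input."""
    kr, kc = kernel.shape
    out_r = image.shape[0] - kr + 1
    out_c = image.shape[1] - kc + 1
    out = np.zeros((out_r, out_c))
    for i in range(kr):
        for j in range(kc):
            out += kernel[i, j] * image[i:i + out_r, j:j + out_c]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cnn_convolve(images, W, b):
    """Sketch of the three outer loops.

    images: (numImages, rows, cols, numChannels)
    W:      (numFeatures, patchDim, patchDim, numChannels)
    b:      (numFeatures,)
    """
    num_images, rows, cols, num_channels = images.shape
    num_features, patch_dim = W.shape[0], W.shape[1]
    out_r = rows - patch_dim + 1
    out_c = cols - patch_dim + 1
    convolved = np.zeros((num_features, num_images, out_r, out_c))
    for image_num in range(num_images):          # loop over images
        for feature_num in range(num_features):  # loop over learned features
            acc = np.zeros((out_r, out_c))
            for channel in range(num_channels):  # loop over image channels
                # Step 1: 2-D valid convolution for this feature/channel
                acc += conv2_valid(images[image_num, :, :, channel],
                                   W[feature_num, :, :, channel])
            # Step 2: add the bias; Step 3: apply the sigmoid
            convolved[feature_num, image_num] = sigmoid(acc + b[feature_num])
    return convolved
```

Summing the per-channel convolutions into acc before adding the bias matches the decomposition above: the loop body computes $Wx_{(r,c)}$ for all valid $(r, c)$ at once.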
This means that you will need three outer loops over the image number imageNum, the feature number featureNum, and the channel number of the image channel, with the 2-D convolution of the weight matrix for the featureNum-th feature and channel-th channel with the image matrix for the imageNum-th image going inside.

To each of convolvedFeatures, you should then add b, the corresponding bias for the featureNum-th feature. If you had not done any preprocessing of the patches, you could then apply the sigmoid function to obtain the convolved features. However, because you preprocessed the patches before learning features on them, you must also apply the same preprocessing steps to the convolved patches to get the correct feature activations.

In particular, you did the following to the patches: