Exercise:PCA and Whitening

==== Step 0a: Load data ====

The starter code contains code to load 10000 12x12 patches sampled from natural images. The raw patches will look something like this:
[[File:raw_images.png|240px|alt=Raw images|Raw images]]

These patches are stored as column vectors <math>x^{(i)} \in \mathbb{R}^{144}</math> in the <math>144 \times 10000</math> matrix <math>x</math>.
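
As a quick sanity check, a minimal Octave/MATLAB sketch along the following lines can confirm this layout and display a few randomly chosen patches. The variable name <code>x</code> follows the text; <code>display_network</code> is assumed to be the patch-visualisation helper shipped with the starter code.

<pre>
% Sanity-check the layout described above: 144-dimensional patches stored
% as the columns of a 144 x 10000 matrix x.
assert(size(x, 1) == 144 && size(x, 2) == 10000);

% Display 36 randomly selected raw patches.
randsel = randi(size(x, 2), 36, 1);
display_network(x(:, randsel));
</pre>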

==== Step 0b: Zero mean the data ====
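
A minimal sketch of this step, assuming the patches are held in the <math>144 \times 10000</math> matrix <code>x</code> as above: for natural image patches it is common to remove the mean intensity of each patch separately (one mean per column), rather than a per-pixel mean across the whole dataset.

<pre>
% Zero-mean the data: subtract each patch's own mean intensity.
avg = mean(x, 1);                      % 1 x 10000 row vector of per-patch means
x = x - repmat(avg, size(x, 1), 1);    % replicate each mean over the 144 rows and subtract
</pre>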

Now that you have found <math>k</math>, you can reduce the dimension of the data by discarding the remaining dimensions. In this way, you can represent the data in <math>k</math> dimensions instead of the original 144, which will save you computational time when running learning algorithms on the reduced representation.
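
A minimal sketch of this reduction, assuming the earlier PCA step produced <code>U</code> and <code>S</code> from <code>[U, S, V] = svd(sigma)</code> with <code>sigma = x * x' / size(x, 2)</code> (these names are illustrative): <math>k</math> is chosen as the smallest number of components retaining at least 99% of the variance, and the reduced data is the projection onto the top <math>k</math> eigenvectors.

<pre>
% Assumes U and S were computed from the covariance matrix in the earlier
% PCA step (illustrative names).
lambda = diag(S);                                    % variance along each principal direction
k = find(cumsum(lambda) / sum(lambda) >= 0.99, 1);   % smallest k retaining at least 99% of the variance

% Keep only the top k components: project onto the first k eigenvectors.
xTilde = U(:, 1:k)' * x;                             % k x 10000 reduced representation
</pre>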

To see the effect of dimension reduction, invert the PCA transformation to produce the matrix <math>\hat{x}</math>, the dimension-reduced data with respect to the original basis. Visualise <math>\hat{x}</math> and compare it to the raw data, <math>x</math>. You will observe that there is little loss due to throwing away the principal components that correspond to dimensions with low variation. For comparison, you may also wish to generate and visualise <math>\hat{x}</math> for when only 90% of the variance is retained.

<table>
<tr>
<td>[[File:pca_images.png|240px|alt=PCA dimension-reduced images (99% variance)|PCA dimension-reduced images (99% variance)]]</td>
<td>[[File:raw_images.png|240px|alt=Raw images|Raw images]]</td>
<td>[[File:pca_images_90.png|240px|alt=PCA dimension-reduced images (90% variance)|PCA dimension-reduced images (90% variance)]]</td>
</tr>
<tr>
<td>PCA dimension-reduced images<br /> (99% variance)</td>
<td>Raw images <br /> &nbsp; </td>
<td>PCA dimension-reduced images<br /> (90% variance)</td>
</tr>
</table>
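
Continuing with the illustrative names above, the inversion of the PCA transformation is a single matrix multiplication: rotating the <math>k</math>-dimensional representation back through the top <math>k</math> eigenvectors gives <math>\hat{x}</math> in the original pixel basis, which can then be visualised alongside the raw patches.

<pre>
% Map the reduced representation back to the original 144-dimensional basis;
% equivalent to U * [xTilde; zeros(144 - k, size(x, 2))].
xHat = U(:, 1:k) * xTilde;                 % 144 x 10000 approximation to x

% Visualise the reconstruction next to the (zero-meaned) raw patches
% (display_network and randsel as in the earlier sketches).
figure; display_network(xHat(:, randsel));
figure; display_network(x(:, randsel));
</pre>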
