Deep Networks: Overview

== Deep architectures ==  
A deep architecture is a multi-layer architecture comprising non-linear functions at each level. (To see why the functions must be non-linear, consider the effect of composing multiple linear functions.) Deep architectures are favoured over shallow architectures for their greater expressive power and greater ability to generalize, among other things, as will be described in greater detail below.
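To see this concretely (a brief aside, with <math>W_1, W_2, b_1, b_2</math> used here only as generic weight matrices and bias vectors), compose two purely linear layers:

<math>h = W_1 x + b_1, \qquad y = W_2 h + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2).</math>

The result is again a single linear (affine) function of the input, so without a non-linearity (such as a sigmoid) applied between layers, extra depth adds no expressive power.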
To give a concrete example of a deep architecture: in the earlier sections you constructed a 3-layer neural network comprising an input, hidden and output layer. Such a network would be considered a shallow architecture, since it contains only 1 hidden layer. If you added more hidden layers to your network, as you will be doing later in this section, it would become a deep architecture.
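As an informal illustration (not part of the original exercises; the layer sizes, NumPy, and random weights below are arbitrary choices for this sketch), the same forward-pass code covers both cases: one hidden layer gives the shallow 3-layer network, and stacking more (W, b) pairs gives a deep architecture.

<pre>
import numpy as np

def sigmoid(z):
    """Element-wise logistic non-linearity."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers.

    Each layer computes sigmoid(W a + b); with one hidden (W, b) pair this
    is the shallow 3-layer network, and with several pairs it is deep.
    """
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Illustrative sizes only: 4 inputs, hidden layers of width 5, 1 output.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)

# Shallow: input -> hidden -> output (one hidden layer).
shallow_W = [rng.standard_normal((5, 4)), rng.standard_normal((1, 5))]
shallow_b = [np.zeros(5), np.zeros(1)]

# Deep: the same network with extra hidden layers stacked in between.
deep_W = [rng.standard_normal((5, 4)), rng.standard_normal((5, 5)),
          rng.standard_normal((5, 5)), rng.standard_normal((1, 5))]
deep_b = [np.zeros(5), np.zeros(5), np.zeros(5), np.zeros(1)]

print(forward(x, shallow_W, shallow_b))
print(forward(x, deep_W, deep_b))
</pre>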
== Advantages of deep architectures ==
===Expressiveness and compactness===
Deep architectures can model a greater range of functions than shallow architectures. Further, with deep architectures, these functions can be modeled with fewer components (neurons, in the case of neural networks) than in the equivalent shallow architectures. In fact, there are functions that a k-layer architecture can represent compactly (with the number of components ''polynomial'' in the number of inputs), but that a (k-1)-layer architecture cannot (requiring a number of components ''exponential'' in the number of inputs).
For example, in a boolean network in which alternate layers implement the logical OR and logical AND of preceding layers, the parity function would require an exponential number of components to be represented in a 2-layer network, but only a polynomial number of components if represented in a network of sufficient depth.
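To make the gap concrete (a rough sketch under the usual AND/OR/NOT gate model; the exact constants depend on the model assumed here): a 2-layer OR-of-ANDs representation of parity needs one AND term for every odd-parity input assignment, of which there are <math>2^{n-1}</math> for <math>n</math> inputs. For <math>n = 3</math>,

<math>x_1 \oplus x_2 \oplus x_3 = (\bar{x}_1 \bar{x}_2 x_3) \vee (\bar{x}_1 x_2 \bar{x}_3) \vee (x_1 \bar{x}_2 \bar{x}_3) \vee (x_1 x_2 x_3),</math>

already <math>2^{3-1} = 4</math> terms. A deeper network can instead compute parity as a balanced tree of pairwise XORs, using only <math>n - 1</math> XOR blocks (each a small, fixed AND/OR/NOT sub-circuit), for a total size polynomial (in fact linear) in <math>n</math> and depth <math>O(\log n)</math>.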
Informally, one way a deep architecture helps in representing functions compactly is through ''factorization''. Factorization, as the name suggests, occurs when the network represents at lower layers functions of the input that are then reused multiple times at higher layers. To gain some intuition for this, consider an arithmetic network for computing the values of polynomials, in which alternate layers implement addition and multiplication. In this network, an intermediate layer could compute the values of terms which are then used repeatedly in the next higher layer, the results of which are used repeatedly in the next higher layer, and so on.
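For instance (a made-up polynomial, chosen only to illustrate the reuse), consider

<math>p(x_1, x_2, x_3, x_4) = (x_1 x_2 + x_3 x_4)^2 + x_3 x_4\,(x_1 x_2 + x_3 x_4).</math>

A deep arithmetic network can compute the products <math>x_1 x_2</math> and <math>x_3 x_4</math> once in a low layer, their sum <math>s = x_1 x_2 + x_3 x_4</math> in the next layer, and then reuse <math>s</math> in both terms of the top layer; a 2-layer network would instead have to expand <math>p</math> into a sum of monomials and compute each monomial separately.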
===Statistical efficiency===
Another upshot of the compact representation that deep architectures afford is statistical efficiency - less training data is needed to tune the comparatively smaller number of parameters in a compact representation.
== Difficulty of training deep architectures ==
While the benefits of deep architectures in terms of compactness and expressive power have been appreciated for many decades, researchers had little success in training deep architectures before 2006: training a randomly initialized deep architecture often led to poor results.
===Why random initialization fails===
===Greedy layer-wise training===
How should deep architectures be trained then? One method that has seen some success is the '''greedy layer-wise training''' method. In this method, the layers of the architecture are trained one at a time, with the input to each layer being the output of the previously trained layer below it. Training of each layer can be either supervised (say, with classification error as the objective function) or unsupervised (say, with the error of the layer in reconstructing its input as the objective function, as in an autoencoder). The weights obtained from training the layers individually are then used to initialize the weights in the deep architecture, and only then is the entire architecture '''fine-tuned''', that is, trained together. The success of greedy layer-wise training has been attributed to a number of factors, described below.
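To make the procedure concrete, the following is a minimal sketch (an illustrative NumPy implementation with made-up layer sizes; the plain autoencoder and batch gradient descent used here stand in for whatever unsupervised learner and optimizer one actually chooses). Each layer is pretrained on the output of the layer below, and the learned weights are returned as the initialization for fine-tuning.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(data, n_hidden, lr=0.1, epochs=50, seed=0):
    """Train a one-hidden-layer autoencoder on `data` (n_samples x n_visible)
    by batch gradient descent on the squared reconstruction error.
    Returns the encoder weights and biases."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W_enc = 0.1 * rng.standard_normal((n_visible, n_hidden))
    W_dec = 0.1 * rng.standard_normal((n_hidden, n_visible))
    b_enc = np.zeros(n_hidden)
    b_dec = np.zeros(n_visible)
    for _ in range(epochs):
        h = sigmoid(data @ W_enc + b_enc)        # encode
        x_hat = sigmoid(h @ W_dec + b_dec)       # decode / reconstruct
        err = x_hat - data                       # reconstruction error
        # Backpropagate the squared error through decoder, then encoder.
        d_dec = err * x_hat * (1 - x_hat)
        d_enc = (d_dec @ W_dec.T) * h * (1 - h)
        W_dec -= lr * h.T @ d_dec / len(data)
        b_dec -= lr * d_dec.mean(axis=0)
        W_enc -= lr * data.T @ d_enc / len(data)
        b_enc -= lr * d_enc.mean(axis=0)
    return W_enc, b_enc

def greedy_pretrain(data, layer_sizes):
    """Greedy layer-wise pretraining: each layer is trained as an autoencoder
    on the representation produced by the (already trained) layer below.
    The resulting weights initialize the deep network before fine-tuning."""
    weights, biases, rep = [], [], data
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(rep, n_hidden)
        weights.append(W)
        biases.append(b)
        rep = sigmoid(rep @ W + b)   # feed forward: input to the next layer
    return weights, biases           # initial values for supervised fine-tuning

# Illustrative use on random data: 100 samples of 8 features, two hidden layers.
X = np.random.default_rng(1).random((100, 8))
init_W, init_b = greedy_pretrain(X, layer_sizes=[6, 4])
print([W.shape for W in init_W])     # [(8, 6), (6, 4)]
</pre>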
====Regularization and better local optima====
Because the weights of the layers have already been initialized to reasonable values, the final solution is somewhat constrained to lie near the good initial solution (which may be seen as a prior on the parameters). Furthermore, training starts from a better location than when the weights are randomly initialized, greatly increasing the likelihood of reaching a better local optimum.
====Feature learning====
