Deep Networks: Overview

From Ufldl

===Expressiveness and compactness===
Deep architectures can model a greater range of functions than shallow architectures. Further, with deep architectures these functions can be modelled with fewer components (neurons, in the case of neural networks) than in equivalent shallow architectures. In fact, there are functions that a k-layer architecture can represent compactly (with the number of components ''polynomial'' in the number of inputs), but that a (k-1)-layer architecture cannot (the number of components required is ''exponential'' in the number of inputs).
For example, in a boolean network in which alternate layers implement the logical OR and logical AND of the preceding layer, the parity function requires an exponential number of components when represented by a 2-layer network, but only a polynomial number of components in a network of sufficient depth.
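The size gap in the parity example can be made concrete with a small counting sketch. The snippet below is not from the original text; it assumes the standard constructions: a two-layer (OR-of-ANDs) circuit for n-bit parity needs one AND term per odd-weight input pattern, i.e. 2^(n-1) terms, while a deep circuit built as a balanced tree of XOR gates, each XOR expanded into a constant-size AND/OR/NOT subcircuit, needs only O(n) gates.

```python
from itertools import product

def dnf_term_count(n):
    # Number of AND terms in the two-layer OR-of-ANDs circuit for parity:
    # one term per n-bit input pattern with an odd number of 1s, i.e. 2**(n-1).
    return sum(1 for bits in product([0, 1], repeat=n) if sum(bits) % 2 == 1)

def deep_gate_count(n):
    # XOR(a, b) = OR(AND(a, NOT b), AND(NOT a, b)) uses 5 basic gates.
    # A balanced XOR tree over n inputs uses n - 1 XOR gates in total.
    return 5 * (n - 1)

for n in [4, 8, 16]:
    print(n, dnf_term_count(n), deep_gate_count(n))
```

Running this shows the shallow circuit's size doubling with every added input while the deep circuit grows linearly, which is the polynomial-versus-exponential contrast the paragraph describes.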

Revision as of 02:53, 21 April 2011
