Sparse Autoencoder Recap
From Ufldl
Translator: 严晓东, yan.endless@gmail.com, Sina Weibo: @月蝕-eclipse

Proofreader: 林锋, email: xlfg@yeah.net, Sina Weibo: @大黄蜂的思索

Wiki uploader: 严晓东, email: yan.endless, Sina Weibo: @GuitarFang
== Sparse Autoencoder Recap ==
:[Initial translation]:
Sparse autoencoder recap
:[First proofread]:
Sparse autoencoder recap
:[Original]:
In the sparse autoencoder, we had 3 layers of neurons: an input layer, a hidden layer and an output layer. In our previous description of autoencoders (and of neural networks), every neuron in the network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This will result in a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.
:[Initial translation]:
In the sparse autoencoder there are three layers: an input layer, a hidden layer, and an output layer. In our earlier definition of the autoencoder (as a neural network), every neuron in the network used the same activation function. In these notes, we describe a modified version of the autoencoder in which some of the neurons use a different activation function. This yields a model that is sometimes simpler to apply, and can also be more robust to variations in the parameters.
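The three-layer architecture described above can be sketched as a single forward pass. This is a minimal illustration, not the tutorial's implementation: it assumes a sigmoid hidden layer and a linear output layer (one concrete example of "some neurons using a different activation function"), and the layer sizes and random weights are placeholders chosen for the sketch:

```python
import numpy as np

def sigmoid(z):
    # logistic activation, used here for the hidden layer
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative sizes (not from the tutorial): 8 inputs, 3 hidden units.
n_input, n_hidden = 8, 3

# Random weights and zero biases for the two weight layers.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_input))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_input, n_hidden))
b2 = np.zeros(n_input)

def forward(x):
    # Hidden layer: sigmoid activation.
    a2 = sigmoid(W1 @ x + b1)
    # Output layer: linear activation -- the "different activation
    # function" used by some neurons in the modified autoencoder.
    a3 = W2 @ a2 + b2
    return a2, a3

x = rng.normal(size=n_input)   # one example input
a2, a3 = forward(x)
```

Because the output layer is linear, the reconstruction `a3` is not confined to (0, 1) the way a sigmoid output would be, which is what makes this variant simpler to apply to unbounded inputs.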