| Package | Description |
|---|---|
| org.nd4j.linalg.learning | |
| Modifier and Type | Class and Description |
|---|---|
| class | AdaDelta: The AdaDelta updater. Reference: http://www.matthewzeiler.com/pubs/googleTR2012/googleTR2012.pdf |
| class | AdaGrad: Vectorized learning rate, maintained per connection weight. Adapted from: http://xcorr.net/2014/01/23/adagrad-eliminating-learning-rates-in-stochastic-gradient-descent/ |
| class | Adam: The Adam updater. |
| class | Nesterovs: Nesterov's momentum. |
| class | RmsProp: RMSProp updates. References: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf and http://cs231n.github.io/neural-networks-3/#ada |
| class | Sgd |
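The classes above each implement a standard per-parameter update rule. As a minimal sketch of the rule behind the Adam class, here is the textbook Adam update on plain `double[]` arrays; the class and method names are hypothetical stand-ins for illustration, not the actual ND4J API:

```java
// Illustrative sketch of the Adam update rule (Kingma & Ba) that the Adam
// updater class implements. Plain double[] arrays stand in for ND4J arrays;
// AdamSketch and getGradient are hypothetical names, not the library API.
public class AdamSketch {
    final double lr, beta1, beta2, eps;
    double[] m, v;   // first- and second-moment estimates
    int t = 0;       // time step, used for bias correction

    AdamSketch(int n, double lr, double beta1, double beta2, double eps) {
        this.lr = lr; this.beta1 = beta1; this.beta2 = beta2; this.eps = eps;
        m = new double[n];
        v = new double[n];
    }

    // Returns the update to subtract from the parameters for this gradient.
    double[] getGradient(double[] grad) {
        t++;
        double[] update = new double[grad.length];
        for (int i = 0; i < grad.length; i++) {
            m[i] = beta1 * m[i] + (1 - beta1) * grad[i];
            v[i] = beta2 * v[i] + (1 - beta2) * grad[i] * grad[i];
            double mHat = m[i] / (1 - Math.pow(beta1, t));  // bias-corrected moments
            double vHat = v[i] / (1 - Math.pow(beta2, t));
            update[i] = lr * mHat / (Math.sqrt(vHat) + eps);
        }
        return update;
    }
}
```

On the first step the bias correction makes the update approximately `lr * sign(gradient)`, which is a quick way to sanity-check an implementation.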
| Modifier and Type | Method and Description |
|---|---|
| GradientUpdater | RmsProp.RmsPropAggregator.getUpdater() |
| GradientUpdater | Adam.AdamAggregator.getUpdater() |
| GradientUpdater | AdaGrad.AdaGradAggregator.getUpdater() |
| GradientUpdater | Nesterovs.NesterovsAggregator.getUpdater() |
| GradientUpdater | AdaDelta.AdaDeltaAggregator.getUpdater() |
| GradientUpdater | Sgd.SgdAggregator.getUpdater() |
| GradientUpdater | GradientUpdaterAggregator.getUpdater(): Get the final updater after aggregation. |
| Modifier and Type | Method and Description |
|---|---|
| void | RmsProp.RmsPropAggregator.aggregate(GradientUpdater updater) |
| void | Adam.AdamAggregator.aggregate(GradientUpdater updater) |
| void | AdaGrad.AdaGradAggregator.aggregate(GradientUpdater updater) |
| void | Nesterovs.NesterovsAggregator.aggregate(GradientUpdater updater) |
| void | AdaDelta.AdaDeltaAggregator.aggregate(GradientUpdater updater) |
| void | Sgd.SgdAggregator.aggregate(GradientUpdater updater) |
| void | GradientUpdaterAggregator.aggregate(GradientUpdater updater): Add/aggregate a GradientUpdater with this GradientUpdaterAggregator. |
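The aggregator methods above follow a simple pattern: `aggregate(...)` collects the state of several per-worker updaters, and `getUpdater()` returns one merged updater afterwards. Here is a sketch of that pattern for an AdaGrad-style updater whose state is its accumulated squared gradients; the averaging rule and all names are assumptions for illustration, and the real ND4J aggregators define their own combination logic:

```java
// Illustrative sketch of the aggregator pattern: collect several per-worker
// AdaGrad-style updaters, then merge their accumulated squared-gradient state
// by averaging. Class names and the averaging rule are hypothetical; they are
// not the actual ND4J implementation.
import java.util.ArrayList;
import java.util.List;

public class AdaGradAggregatorSketch {
    static class AdaGradSketch {
        double[] histGrad;   // accumulated squared gradients (AdaGrad state)
        AdaGradSketch(double[] h) { histGrad = h; }
    }

    private final List<AdaGradSketch> seen = new ArrayList<>();

    // Add/aggregate an updater with this aggregator (mirrors aggregate(...)).
    void aggregate(AdaGradSketch updater) {
        seen.add(updater);
    }

    // Get the final updater after aggregation (mirrors getUpdater()).
    AdaGradSketch getUpdater() {
        int n = seen.get(0).histGrad.length;
        double[] merged = new double[n];
        for (AdaGradSketch u : seen)
            for (int i = 0; i < n; i++)
                merged[i] += u.histGrad[i] / seen.size();
        return new AdaGradSketch(merged);
    }
}
```

This shape makes the interface useful for distributed training: each worker keeps its own updater, and the aggregator reconciles their state into one updater at synchronization points.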
Copyright © 2016. All Rights Reserved.