mx.symbol.nag_mom_update
Description
Update function for the Nesterov Accelerated Gradient (NAG) optimizer. It updates the weights using the following formula:

\[v_t = \gamma v_{t-1} + \eta \nabla J(W_{t-1} - \gamma v_{t-1})\]
\[W_t = W_{t-1} - v_t\]

Where \(\eta\) is the learning rate of the optimizer, \(\gamma\) is the decay rate of the momentum estimate, \(v_t\) is the update vector at time step \(t\), and \(W_t\) is the weight vector at time step \(t\).
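The update rule above can be sketched in NumPy. This is a minimal illustration, not the MXNet operator itself: it ignores `wd`, `rescale_grad`, and `clip_gradient`, and the function name and values are assumptions for the example.

```python
import numpy as np

def nag_mom_update(weight, grad, mom, lr, momentum=0.9):
    """One NAG step following the documented formula.

    Per the operator's convention, `grad` stands in for the gradient
    evaluated at the look-ahead point W_{t-1} - gamma * v_{t-1}.
    """
    # v_t = gamma * v_{t-1} + eta * grad
    mom_new = momentum * mom + lr * grad
    # W_t = W_{t-1} - v_t
    weight_new = weight - mom_new
    return weight_new, mom_new

# First step from zero momentum: v = 0.1 * [0.5, 0.5] = [0.05, 0.05]
w = np.array([1.0, -2.0])
g = np.array([0.5, 0.5])
v = np.zeros(2)
w, v = nag_mom_update(w, g, v, lr=0.1)
```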
Usage
mx.symbol.nag_mom_update(...)
Arguments
Argument | Description
---|---
`weight` | NDArray-or-Symbol. Weight
`grad` | NDArray-or-Symbol. Gradient
`mom` | NDArray-or-Symbol. Momentum
`lr` | float, required. Learning rate
`momentum` | float, optional, default=0. The decay rate of momentum estimates at each epoch.
`wd` | float, optional, default=0. Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
`rescale.grad` | float, optional, default=1. Rescale gradient to grad = rescale_grad * grad.
`clip.gradient` | float, optional, default=-1. Clip gradient to the range [-clip_gradient, clip_gradient]: grad = max(min(grad, clip_gradient), -clip_gradient). If clip_gradient <= 0, gradient clipping is turned off.
`name` | string, optional. Name of the resulting symbol.
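The rescaling and clipping arguments described above compose as a preprocessing step on the raw gradient. A minimal NumPy sketch of that behavior, assuming (as is conventional in MXNet optimizers) that rescaling is applied before clipping; the helper name is illustrative:

```python
import numpy as np

def preprocess_grad(grad, rescale_grad=1.0, clip_gradient=-1.0):
    # grad = rescale_grad * grad
    grad = rescale_grad * grad
    # Clipping is active only when clip_gradient > 0;
    # the default of -1 leaves the gradient unclipped.
    if clip_gradient > 0:
        grad = np.clip(grad, -clip_gradient, clip_gradient)
    return grad

g = np.array([-4.0, 0.5, 3.0])
# Rescaled to [-2.0, 0.25, 1.5], then clipped to [-1.0, 0.25, 1.0]
clipped = preprocess_grad(g, rescale_grad=0.5, clip_gradient=1.0)
```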
Value
out
The result mx.symbol
Link to Source Code: http://github.com/apache/incubator-mxnet/blob/1.6.0/src/operator/optimizer_op.cc#L726