mx.nd.nag.mom.update

Description

Update function for the Nesterov Accelerated Gradient (NAG) optimizer. It updates the weights using the following formula:

\[\begin{split}v_t = \gamma v_{t-1} + \eta * \nabla J(W_{t-1} - \gamma v_{t-1})\\ W_t = W_{t-1} - v_t\end{split}\]

where \(\eta\) is the learning rate of the optimizer, \(\gamma\) is the decay rate of the momentum estimate, \(v_t\) is the update vector at time step \(t\), and \(W_t\) is the weight vector at time step \(t\).
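To make the two-step rule concrete, here is a minimal sketch of one NAG step in plain R. The helper nag_step, the gradient function grad_fn, and the example values are illustrative assumptions for this sketch, not part of the mxnet API; the operator itself takes an already-computed gradient NDArray (see the arguments below).

    # One NAG step following the formula above (illustrative helper,
    # not part of the mxnet API). grad_fn evaluates the gradient of the
    # objective J at a given point.
    nag_step <- function(weight, mom, grad_fn, lr, momentum) {
      lookahead <- weight - momentum * mom            # W_{t-1} - gamma * v_{t-1}
      v <- momentum * mom + lr * grad_fn(lookahead)   # v_t
      list(weight = weight - v, mom = v)              # W_t = W_{t-1} - v_t
    }

    # Example: minimise J(w) = 0.5 * sum(w^2), whose gradient is w itself.
    state <- nag_step(weight = c(1, 2, 3), mom = c(0, 0, 0),
                      grad_fn = function(w) w, lr = 0.1, momentum = 0.9)
    state$weight   # c(0.9, 1.8, 2.7) after the first step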

Arguments

weight: NDArray-or-Symbol. Weight.

grad: NDArray-or-Symbol. Gradient.

mom: NDArray-or-Symbol. Momentum.

lr: float, required. Learning rate.

momentum: float, optional, default=0. The decay rate of momentum estimates at each epoch.

wd: float, optional, default=0. Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.

rescale.grad: float, optional, default=1. Rescales the gradient as grad = rescale_grad * grad.

clip.gradient: float, optional, default=-1. Clips the gradient to the range [-clip_gradient, clip_gradient], i.e. grad = max(min(grad, clip_gradient), -clip_gradient). If clip_gradient <= 0, gradient clipping is turned off.
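A minimal usage sketch follows, assuming the auto-generated R binding mx.nd.nag.mom.update accepts the named arguments listed above and returns the updated weight as an NDArray; the input values are illustrative only.

    library(mxnet)

    weight <- mx.nd.array(c(1.0, 2.0, 3.0))
    grad   <- mx.nd.array(c(0.1, 0.2, 0.3))   # gradient computed for the current step
    mom    <- mx.nd.zeros(3)                  # momentum state, initialised to zero

    new.weight <- mx.nd.nag.mom.update(weight = weight, grad = grad, mom = mom,
                                       lr = 0.01, momentum = 0.9, wd = 0.0001,
                                       rescale.grad = 1, clip.gradient = -1)
    print(as.array(new.weight))

The momentum NDArray mom holds the optimizer state (the update vector \(v_t\)) and should be carried over between successive calls.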