contrib.autograd

Autograd for NDArray.

Classes

TrainingStateScope(enter_state)

Scope for managing training state.

Functions

backward(outputs[, out_grads, retain_graph])

Compute the gradients of outputs w.r.t. variables.

compute_gradient(outputs)

Deprecated.

grad(func[, argnum])

Return a function that computes the gradient of arguments.

grad_and_loss(func[, argnum])

Return a function that computes both the gradient of arguments and the loss value.

mark_variables(variables, gradients[, grad_reqs])

Mark NDArrays as variables to compute gradient for autograd.

set_is_training(is_train)

Set status to training/not training.

test_section()

Returns a testing scope context to be used in a ‘with’ statement; it captures testing code.

train_section()

Returns a training scope context to be used in a ‘with’ statement; it captures training code.

class mxnet.contrib.autograd.TrainingStateScope(enter_state)[source]

Bases: object

Scope for managing training state.

Example::

    with TrainingStateScope(True):
        y = model(x)
        compute_gradient([y])

mxnet.contrib.autograd.backward(outputs, out_grads=None, retain_graph=False)[source]

Compute the gradients of outputs w.r.t. variables.

Parameters
  • outputs (list of NDArray) – Output NDArrays to compute gradients of.

  • out_grads (list of NDArray or None) – Head gradients with respect to outputs; if None, ones are used.

  • retain_graph (bool) – Whether to retain the computation graph so that backward can be called on it again.

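A minimal usage sketch, assuming MXNet 1.x with this deprecated contrib.autograd API; the names x, x_grad, and y are illustrative::

    import mxnet as mx
    from mxnet.contrib import autograd

    x = mx.nd.array([[1, 2, 3], [4, 5, 6]])
    x_grad = mx.nd.zeros_like(x)              # buffer that backward() fills
    autograd.mark_variables([x], [x_grad])

    with autograd.train_section():            # record the graph
        y = x * 2
    # scale the head gradient via out_grads; with out_grads=None, ones are used
    autograd.backward([y], out_grads=[mx.nd.ones_like(y) * 0.5])
    print(x_grad.asnumpy())                   # expected: all entries 1.0
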
mxnet.contrib.autograd.compute_gradient(outputs)[source]

Deprecated. Please use backward instead.

mxnet.contrib.autograd.grad(func, argnum=None)[source]

Return function that computes gradient of arguments.

Parameters
  • func (a python function) – The forward (loss) function.

  • argnum (an int or a list of int) – The index (or indices) of the arguments to calculate gradients for.

Returns

grad_func – A function that computes the gradient of arguments.

Return type

a python function

Examples

>>> import random
>>> from mxnet import nd
>>> from mxnet.contrib.autograd import grad
>>> # autograd supports dynamic graphs: the graph may
>>> # change on every call
>>> def func(x):
>>>     r = random.randint(0, 1)
>>>     if r % 2:
>>>         return x**2
>>>     else:
>>>         return x/3
>>> # use `grad(func)` to get the gradient function
>>> for _ in range(10):
>>>     grad_func = grad(func)
>>>     inputs = nd.array([[1, 2, 3], [4, 5, 6]])
>>>     grad_vals = grad_func(inputs)
mxnet.contrib.autograd.grad_and_loss(func, argnum=None)[source]

Return a function that computes both the gradient of arguments and the loss value.

Parameters
  • func (a python function) – The forward (loss) function.

  • argnum (an int or a list of int) – The index (or indices) of the arguments to calculate gradients for.

Returns

grad_and_loss_func – A function that computes both the gradient of arguments and the loss value.

Return type

a python function
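A minimal sketch of typical use, assuming MXNet 1.x; loss_fn and x are illustrative names, and the wrapped function is assumed here to return the pair (gradients, loss)::

    import mxnet as mx
    from mxnet.contrib.autograd import grad_and_loss

    def loss_fn(x):
        return (x ** 2).sum()          # simple scalar loss

    x = mx.nd.array([1.0, 2.0, 3.0])
    grad_and_loss_fn = grad_and_loss(loss_fn)
    grads, loss = grad_and_loss_fn(x)  # grads is a list, one entry per argument
    print(grads[0].asnumpy())          # expected: [2. 4. 6.]
    print(loss.asnumpy())              # expected: [14.]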

mxnet.contrib.autograd.mark_variables(variables, gradients, grad_reqs='write')[source]

Mark NDArrays as variables to compute gradient for autograd.

Parameters
  • variables (list of NDArray) – NDArrays to mark as variables.

  • gradients (list of NDArray) – Buffers that will receive the computed gradients, one per variable.

  • grad_reqs (list of string) – How gradients are written to the buffers: ‘write’ (overwrite), ‘add’ (accumulate), or ‘null’ (skip); defaults to ‘write’.

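A minimal sketch, assuming MXNet 1.x; w and w_grad are illustrative names::

    import mxnet as mx
    from mxnet.contrib import autograd

    w = mx.nd.ones((2, 3))
    w_grad = mx.nd.zeros_like(w)       # gradient buffer for w
    # 'write' overwrites the buffer on each backward pass;
    # 'add' would accumulate into it instead
    autograd.mark_variables([w], [w_grad], grad_reqs='write')

    with autograd.train_section():
        out = w * w
    autograd.backward([out])
    print(w_grad.asnumpy())            # expected: 2 * w, i.e. all entries 2.0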

mxnet.contrib.autograd.set_is_training(is_train)[source]

Set status to training/not training. When training, a graph will be constructed for gradient computation. Operators will also run with ctx.is_train=True. For example, Dropout will drop inputs randomly when is_train=True, while simply passing them through when is_train=False.

Parameters

is_train (bool) –

Returns

previous state before this set

Return type

bool
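A minimal sketch, assuming MXNet 1.x; it shows the imperative alternative to the train_section()/test_section() scopes described below::

    import mxnet as mx
    from mxnet.contrib import autograd

    prev = autograd.set_is_training(True)   # returns the previous state
    x = mx.nd.ones((2, 3))
    y = mx.nd.Dropout(x, p=0.5)             # drops inputs randomly while training
    autograd.set_is_training(prev)          # restore the previous state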

mxnet.contrib.autograd.test_section()[source]

Returns a testing scope context to be used in a ‘with’ statement; it captures testing code.

Example::

    with autograd.train_section():
        y = model(x)
        compute_gradient([y])
    with autograd.test_section():
        # testing, IO, gradient updates…

mxnet.contrib.autograd.train_section()[source]

Returns a training scope context to be used in a ‘with’ statement; it captures training code.

Example::

    with autograd.train_section():
        y = model(x)
        compute_gradient([y])
    metric.update(…)
    optim.step(…)