Custom Numpy Operators

In this tutorial, we will learn how to build custom operators with NumPy in Python. We will go through two examples:

- Custom operator without any parameters
- Custom operator with parameters

Custom operators in Python are easy to develop and good for prototyping, but they may hurt performance. If you find a custom operator to be a bottleneck, consider moving to a C++ based implementation in the backend.

[1]:
import numpy as np
import mxnet as mx
from mxnet import gluon, autograd
import os
mx.npx.reset_np()

Parameter-less operators

This operator implements the standard sigmoid activation function. It is only for illustration purposes; in real life you would use the built-in sigmoid operator, mx.npx.sigmoid. Recall that sigmoid computes y = 1 / (1 + exp(-x)) and that its derivative can be written as dy/dx = y * (1 - y), which is what the backward pass below uses.

Forward & backward implementation

First we implement the forward and backward computation by sub-classing mx.operator.CustomOp:

[2]:
class Sigmoid(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        """Implements forward computation.

        is_train : bool, whether forwarding for training or testing.
        req : list of {'null', 'write', 'inplace', 'add'}, how to assign to out_data. 'null' means skip assignment, etc.
        in_data : list of NDArray, input data.
        out_data : list of NDArray, pre-allocated output buffers.
        aux : list of NDArray, mutable auxiliary states. Usually not used.
        """
        x = in_data[0].asnumpy()
        y = 1.0 / (1.0 + np.exp(-x))
        self.assign(out_data[0], req[0], mx.nd.array(y))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        """Implements backward computation

        req : list of {'null', 'write', 'inplace', 'add'}, how to assign to in_grad
        out_grad : list of NDArray, gradient w.r.t. output data.
        in_grad : list of NDArray, gradient w.r.t. input data. This is the output buffer.
        """
        y = out_data[0].asnumpy()
        dy = out_grad[0].asnumpy()
        dx = dy*(1.0 - y)*y
        self.assign(in_grad[0], req[0], mx.nd.array(dx))
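
Before registering the operator, it can be useful to sanity-check the backward math with plain NumPy. The following is a quick sketch (hypothetical helper, not part of the operator itself) that compares the analytic gradient y * (1 - y) against a central finite-difference estimate:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = sigmoid(x)
analytic = y * (1.0 - y)
eps = 1e-5
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(analytic, numeric))  # True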

Register custom operator

Then we need to register the custom op and describe its properties, such as input and output shapes, so that MXNet can recognize it. This is done by sub-classing mx.operator.CustomOpProp:

[3]:
@mx.operator.register("sigmoid")  # register with name "sigmoid"
class SigmoidProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(SigmoidProp, self).__init__(True)

    def list_arguments(self):
        #  this can be omitted if you only have 1 input.
        return ['data']

    def list_outputs(self):
        #  this can be omitted if you only have 1 output.
        return ['output']

    def infer_shape(self, in_shapes):
        """Calculate output shapes from input shapes. This can be
        omitted if all your inputs and outputs have the same shape.

        in_shapes : list of shape. Shape is described by a tuple of int.
        """
        data_shape = in_shapes[0]
        output_shape = data_shape
        # return 3 lists representing inputs shapes, outputs shapes, and aux data shapes.
        return (data_shape,), (output_shape,), ()

    def create_operator(self, device, in_shapes, in_dtypes):
        #  create and return the CustomOp class.
        return Sigmoid()
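
Since infer_shape is an ordinary Python method, you can call it directly to see exactly what shapes it reports back to MXNet; a quick sketch:

prop = SigmoidProp()
print(prop.infer_shape([(4,)]))
# -> (((4,),), ((4,),), ())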

Example Usage

We can now use this operator by calling mx.nd.Custom:

[4]:
x = mx.nd.array([0, 1, 2, 3])
# attach gradient buffer to x for autograd
x.attach_grad()
# forward in a record() section to save computation graph for backward
# see autograd tutorial to learn more.
with autograd.record():
    y = mx.nd.Custom(x, op_type='sigmoid')
print(y)

[0.5        0.7310586  0.880797   0.95257413]
<NDArray 4 @cpu(0)>
[5]:
# call backward computation
y.backward()
# gradient is now saved to the grad buffer we attached previously
print(x.grad)

[0.25       0.19661193 0.10499363 0.04517666]
<NDArray 4 @cpu(0)>
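
As a cross-check, the same values can be obtained from the built-in sigmoid operator:

x = mx.nd.array([0, 1, 2, 3])
x.attach_grad()
with autograd.record():
    y = mx.nd.sigmoid(x)
y.backward()
# should match the custom operator's output and gradient above
print(y, x.grad)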

Parametrized Operator

In the second use case we implement an operator with learnable weights. We implement a dense (or fully connected) layer that has one input, one output, and a learnable weight; for simplicity, the bias is passed in as a constant operator argument rather than learned.

The dense operator performs a dot product between the data and the weight, then adds the bias to it.
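
In NumPy terms, the forward pass computes y = x.dot(weight.T) + bias, and the backward pass needs two gradients: dx = dy.dot(weight) for the data and dw = dy.T.dot(x) for the weight. A quick shape sketch with hypothetical sizes makes the bookkeeping clear:

import numpy as np

batch, in_channels, channels = 4, 3, 5
x = np.random.uniform(size=(batch, in_channels))
weight = np.random.uniform(size=(channels, in_channels))
y = x.dot(weight.T) + 0.1   # (batch, channels)
dy = np.ones_like(y)        # stand-in for the upstream gradient
dx = dy.dot(weight)         # (batch, in_channels), same shape as x
dw = dy.T.dot(x)            # (channels, in_channels), same shape as weight
print(dx.shape, dw.shape)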

Forward & backward implementation

[6]:
class Dense(mx.operator.CustomOp):
    def __init__(self, bias):
        super(Dense, self).__init__()
        self._bias = bias

    def forward(self, is_train, req, in_data, out_data, aux):
        x = in_data[0].asnumpy()
        weight = in_data[1].asnumpy()
        y = x.dot(weight.T) + self._bias
        self.assign(out_data[0], req[0], mx.nd.array(y))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        x = in_data[0].asnumpy()
        weight = in_data[1].asnumpy()
        dy = out_grad[0].asnumpy()
        # gradient w.r.t. data and gradient w.r.t. weight
        dx = dy.dot(weight)
        dw = dy.T.dot(x)
        self.assign(in_grad[0], req[0], mx.nd.array(dx))
        self.assign(in_grad[1], req[1], mx.nd.array(dw))

Registration

[7]:
@mx.operator.register("dense")  # register with name "sigmoid"
class DenseProp(mx.operator.CustomOpProp):
    def __init__(self, bias):
        super(DenseProp, self).__init__(True)
        # we use constant bias here to illustrate how to pass arguments
        # to operators. All arguments are in string format so you need
        # to convert them back to the type you want.
        self._bias = float(bias)

    def list_arguments(self):
        return ['data', 'weight']

    def list_outputs(self):
        #  this can be omitted if you only have 1 output.
        return ['output']

    def infer_shape(self, in_shapes):
        data_shape = in_shapes[0]
        weight_shape = in_shapes[1]
        output_shape = (data_shape[0], weight_shape[0])
        # return 3 lists representing inputs shapes, outputs shapes, and aux data shapes.
        return (data_shape, weight_shape), (output_shape,), ()

    def create_operator(self, device, in_shapes, in_dtypes):
        #  create and return the CustomOp class.
        return Dense(self._bias)
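
As with SigmoidProp, the property class can be exercised directly. Note that operator arguments arrive as strings, so bias is passed as a string here; a quick sketch:

prop = DenseProp(bias='0.1')   # arguments arrive in string format
print(prop._bias)              # 0.1, converted back to float
print(prop.infer_shape([(4, 3), (5, 3)]))
# -> (((4, 3), (5, 3)), ((4, 5),), ())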

Use CustomOp together with Block

Parameterized CustomOps are usually used together with Blocks, which hold the parameters.

[8]:
class DenseBlock(mx.gluon.Block):
    def __init__(self, in_channels, channels, bias, **kwargs):
        super(DenseBlock, self).__init__(**kwargs)
        self._bias = bias
        self.weight = gluon.Parameter('weight', shape=(channels, in_channels))

    def forward(self, x):
        device = x.device
        return mx.nd.Custom(x, self.weight.data(device), bias=self._bias, op_type='dense')

Example usage

[9]:
dense = DenseBlock(3, 5, 0.1)
dense.initialize()
x = mx.nd.uniform(shape=(4, 3))
y = dense(x)
print(y)

[[0.12017269 0.05653428 0.1508463  0.07967146 0.12003046]
 [0.12944445 0.07254814 0.13926125 0.06505167 0.10894868]
 [0.09446907 0.03709245 0.15341945 0.12280548 0.13960774]
 [0.10131115 0.06879345 0.1240032  0.11093724 0.12107792]]
<NDArray 4x5 @cpu(0)>
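
Because DenseBlock holds its weight as a gluon Parameter, gradients flow into it through the custom operator's backward pass. A minimal sketch, assuming the backward implementation above that fills both input gradients:

with autograd.record():
    y = dense(x)
y.backward()
# the parameter's gradient buffer is now populated
print(dense.weight.grad())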

Using custom operators with fork

On Linux systems, the default method multiprocessing uses to create a new process is fork. If there are unfinished asynchronous custom operations when forking, the program will block because of the Python GIL. Always use synchronization calls like wait_to_read or waitall before calling fork.

x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# unfinished async sigmoid operation will cause blocking
os.fork()

Handling this correctly would make MXNet depend on libpython, so the current workaround is to ensure that all custom operations have finished executing before forking the process.

x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# force execution by reading y
print(y.asnumpy())
os.fork()
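
Equivalently, you can synchronize explicitly with waitall (or y.wait_to_read()) before forking:

x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# block until all pending async operations have finished
mx.nd.waitall()
os.fork()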