Sparse NDArray API

Overview

This document lists the routines of the n-dimensional sparse array package:

mxnet.ndarray.sparse Sparse NDArray API of MXNet.

The CSRNDArray and RowSparseNDArray APIs, defined in the ndarray.sparse package, provide imperative sparse tensor operations.

A CSRNDArray inherits from NDArray and represents a two-dimensional, fixed-size array in compressed sparse row format.

>>> x = mx.nd.array([[1, 0], [0, 0], [2, 3]])
>>> csr = x.tostype('csr')
>>> type(csr)
<class 'mxnet.ndarray.sparse.CSRNDArray'>
>>> csr.shape
(3, 2)
>>> csr.data.asnumpy()
array([ 1.,  2.,  3.], dtype=float32)
>>> csr.indices.asnumpy()
array([0, 0, 1])
>>> csr.indptr.asnumpy()
array([0, 1, 1, 3])
>>> csr.stype
'csr'

A detailed tutorial is available at CSRNDArray - NDArray in Compressed Sparse Row Storage Format.

A RowSparseNDArray inherits from NDArray and represents a multi-dimensional, fixed-size array in row sparse format.

>>> x = mx.nd.array([[1, 0], [0, 0], [2, 3]])
>>> row_sparse = x.tostype('row_sparse')
>>> type(row_sparse)
<class 'mxnet.ndarray.sparse.RowSparseNDArray'>
>>> row_sparse.data.asnumpy()
array([[ 1.,  0.],
       [ 2.,  3.]], dtype=float32)
>>> row_sparse.indices.asnumpy()
array([0, 2])
>>> row_sparse.stype
'row_sparse'

A detailed tutorial is available at RowSparseNDArray - NDArray for Sparse Gradient Updates.

Note

mxnet.ndarray.sparse is similar to mxnet.ndarray in some aspects. But the differences are not negligible. For instance:

  • Only a subset of operators in mxnet.ndarray have efficient sparse implementations in mxnet.ndarray.sparse.
  • If an operator does not appear in the mxnet.ndarray.sparse namespace, it does not yet have an efficient sparse implementation. If sparse inputs are passed to such an operator, they are converted to the dense format and the operator falls back to the existing dense implementation.
  • The storage types (stype) of sparse operators’ outputs depend on the storage types of their inputs. By default, operators not available in mxnet.ndarray.sparse infer the “default” (dense) storage type for their outputs; see the short sketch after this list. Please refer to the API Reference section below for further details on specific operators.
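
A minimal sketch of this storage-type behaviour; the stypes shown in the comments are what the rules above imply:

>>> a = mx.nd.ones((2,3)).tostype('csr')
>>> b = mx.nd.ones((2,3)).tostype('csr')
>>> mx.nd.sparse.elemwise_add(a, b).stype  # sparse implementation keeps 'csr'
'csr'
>>> mx.nd.log(a).stype  # log has no sparse implementation, so it falls back to dense output
'default'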

Note

mxnet.ndarray.sparse.CSRNDArray is similar to scipy.sparse.csr_matrix in many respects, but the two differ in a few important ways:

  • In MXNet the column indices (CSRNDArray.indices) for a given row are expected to be sorted in ascending order. Duplicate column entries for the same row are not allowed.
  • CSRNDArray.data, CSRNDArray.indices and CSRNDArray.indptr always return deep copies, which is not the case for scipy.sparse.csr_matrix (see the sketch below).
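
A small sketch of the deep-copy difference, assuming numpy and scipy are installed; mutating the array returned by CSRNDArray.data leaves the CSRNDArray untouched, while mutating a scipy csr_matrix's .data modifies the matrix in place:

>>> import numpy as np
>>> import scipy.sparse as spsp
>>> csr = mx.nd.array([[1, 0], [0, 2]]).tostype('csr')
>>> d = csr.data          # deep copy of the data array
>>> d[:] = 0
>>> csr.asnumpy()         # unchanged
array([[ 1.,  0.],
       [ 0.,  2.]], dtype=float32)
>>> sp = spsp.csr_matrix(np.array([[1, 0], [0, 2]], dtype='float32'))
>>> sp.data[:] = 0        # modifies the scipy matrix in place
>>> sp.toarray()
array([[ 0.,  0.],
       [ 0.,  0.]], dtype=float32)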

In the rest of this document, we first overview the methods provided by the ndarray.sparse.CSRNDArray class and the ndarray.sparse.RowSparseNDArray class, and then list other routines provided by the ndarray.sparse package.

The ndarray.sparse package provides several classes:

CSRNDArray A sparse representation of 2D NDArray in the Compressed Sparse Row format.
RowSparseNDArray A sparse representation of a set of NDArray row slices at given indices.

We summarize the interface for each class in the following sections.

The CSRNDArray class

Array attributes

CSRNDArray.shape Tuple of array dimensions.
CSRNDArray.context Device context of the array.
CSRNDArray.dtype Data-type of the array’s elements.
CSRNDArray.stype Storage-type of the array.
CSRNDArray.data A deep copy NDArray of the data array of the CSRNDArray.
CSRNDArray.indices A deep copy NDArray of the indices array of the CSRNDArray.
CSRNDArray.indptr A deep copy NDArray of the indptr array of the CSRNDArray.

Array conversion

CSRNDArray.copy Makes a copy of this NDArray, keeping the same context.
CSRNDArray.copyto Copies the value of this array to another array.
CSRNDArray.as_in_context Returns an array on the target device with the same value as this array.
CSRNDArray.asscipy Returns a scipy.sparse.csr.csr_matrix object with value copied from this array
CSRNDArray.asnumpy Return a dense numpy.ndarray object with value copied from this array
CSRNDArray.asscalar Returns a scalar whose value is copied from this array.
CSRNDArray.astype Return a copy of the array after casting to a specified type.
CSRNDArray.tostype Return a copy of the array with chosen storage type.

Array inspection

CSRNDArray.check_format Check whether the NDArray format is valid.

Array creation

CSRNDArray.zeros_like Convenience fluent method for zeros_like().

Array reduction

CSRNDArray.sum Convenience fluent method for sum().
CSRNDArray.mean Convenience fluent method for mean().
CSRNDArray.norm Convenience fluent method for norm().

Array rounding

CSRNDArray.round Convenience fluent method for round().
CSRNDArray.rint Convenience fluent method for rint().
CSRNDArray.fix Convenience fluent method for fix().
CSRNDArray.floor Convenience fluent method for floor().
CSRNDArray.ceil Convenience fluent method for ceil().
CSRNDArray.trunc Convenience fluent method for trunc().

Trigonometric functions

CSRNDArray.sin Convenience fluent method for sin().
CSRNDArray.tan Convenience fluent method for tan().
CSRNDArray.arcsin Convenience fluent method for arcsin().
CSRNDArray.arctan Convenience fluent method for arctan().
CSRNDArray.degrees Convenience fluent method for degrees().
CSRNDArray.radians Convenience fluent method for radians().

Hyperbolic functions

CSRNDArray.sinh Convenience fluent method for sinh().
CSRNDArray.tanh Convenience fluent method for tanh().
CSRNDArray.arcsinh Convenience fluent method for arcsinh().
CSRNDArray.arctanh Convenience fluent method for arctanh().

Exponents and logarithms

CSRNDArray.expm1 Convenience fluent method for expm1().
CSRNDArray.log1p Convenience fluent method for log1p().

Powers

CSRNDArray.sqrt Convenience fluent method for sqrt().
CSRNDArray.square Convenience fluent method for square().

Joining arrays

concat Joins input arrays along a given axis.

Indexing

CSRNDArray.__getitem__ x.__getitem__(i) <=> x[i]
CSRNDArray.__setitem__ x.__setitem__(i, y) <=> x[i]=y
CSRNDArray.slice Convenience fluent method for slice().

Miscellaneous

CSRNDArray.abs Convenience fluent method for abs().
CSRNDArray.clip Convenience fluent method for clip().
CSRNDArray.sign Convenience fluent method for sign().

Lazy evaluation

CSRNDArray.wait_to_read Waits until all previous write operations on the current array are finished.

The RowSparseNDArray class

Array attributes

RowSparseNDArray.shape Tuple of array dimensions.
RowSparseNDArray.context Device context of the array.
RowSparseNDArray.dtype Data-type of the array’s elements.
RowSparseNDArray.stype Storage-type of the array.
RowSparseNDArray.data A deep copy NDArray of the data array of the RowSparseNDArray.
RowSparseNDArray.indices A deep copy NDArray of the indices array of the RowSparseNDArray.

Array conversion

RowSparseNDArray.copy Makes a copy of this NDArray, keeping the same context.
RowSparseNDArray.copyto Copies the value of this array to another array.
RowSparseNDArray.as_in_context Returns an array on the target device with the same value as this array.
RowSparseNDArray.asnumpy Return a dense numpy.ndarray object with value copied from this array
RowSparseNDArray.asscalar Returns a scalar whose value is copied from this array.
RowSparseNDArray.astype Return a copy of the array after casting to a specified type.
RowSparseNDArray.tostype Return a copy of the array with chosen storage type.

Array inspection

RowSparseNDArray.check_format Check whether the NDArray format is valid.

Array creation

RowSparseNDArray.zeros_like Convenience fluent method for zeros_like().

Array reduction

RowSparseNDArray.norm Convenience fluent method for norm().

Array rounding

RowSparseNDArray.round Convenience fluent method for round().
RowSparseNDArray.rint Convenience fluent method for rint().
RowSparseNDArray.fix Convenience fluent method for fix().
RowSparseNDArray.floor Convenience fluent method for floor().
RowSparseNDArray.ceil Convenience fluent method for ceil().
RowSparseNDArray.trunc Convenience fluent method for trunc().

Trigonometric functions

RowSparseNDArray.sin Convenience fluent method for sin().
RowSparseNDArray.tan Convenience fluent method for tan().
RowSparseNDArray.arcsin Convenience fluent method for arcsin().
RowSparseNDArray.arctan Convenience fluent method for arctan().
RowSparseNDArray.degrees Convenience fluent method for degrees().
RowSparseNDArray.radians Convenience fluent method for radians().

Hyperbolic functions

RowSparseNDArray.sinh Convenience fluent method for sinh().
RowSparseNDArray.tanh Convenience fluent method for tanh().
RowSparseNDArray.arcsinh Convenience fluent method for arcsinh().
RowSparseNDArray.arctanh Convenience fluent method for arctanh().

Exponents and logarithms

RowSparseNDArray.expm1 Convenience fluent method for expm1().
RowSparseNDArray.log1p Convenience fluent method for log1p().

Powers

RowSparseNDArray.sqrt Convenience fluent method for sqrt().
RowSparseNDArray.square Convenience fluent method for square().

Indexing

RowSparseNDArray.__getitem__ x.__getitem__(i) <=> x[i]
RowSparseNDArray.__setitem__ x.__setitem__(i, y) <=> x[i]=y
RowSparseNDArray.retain Convenience fluent method for retain().

Lazy evaluation

RowSparseNDArray.wait_to_read Waits until all previous write operations on the current array are finished.

Miscellaneous

RowSparseNDArray.abs Convenience fluent method for abs().
RowSparseNDArray.clip Convenience fluent method for clip().
RowSparseNDArray.sign Convenience fluent method for sign().

Array creation routines

array Creates a sparse array from any object exposing the array interface.
empty Returns a new array of given shape and type, without initializing entries.
zeros Return a new array of given shape and type, filled with zeros.
zeros_like Return an array of zeros with the same shape, type and storage type as the input array.
csr_matrix Creates a CSRNDArray, a 2D array with compressed sparse row (CSR) format.
row_sparse_array Creates a RowSparseNDArray, a multidimensional row sparse array with a set of tensor slices at given indices.
mxnet.ndarray.load Loads an array from file.
mxnet.ndarray.save Saves a list of arrays or a dict of str->array to file.

Array manipulation routines

Changing array storage type

cast_storage Casts tensor storage type to the new type.

Indexing routines

slice Slices a region of the array.
retain Picks rows specified by the given index array from a row sparse matrix.
where Return the elements, either from x or y, depending on the condition.

Mathematical functions

Arithmetic operations

elemwise_add Adds arguments element-wise.
elemwise_sub Subtracts arguments element-wise.
elemwise_mul Multiplies arguments element-wise.
broadcast_add Returns element-wise sum of the input arrays with broadcasting.
broadcast_sub Returns element-wise difference of the input arrays with broadcasting.
broadcast_mul Returns element-wise product of the input arrays with broadcasting.
broadcast_div Returns element-wise division of the input arrays with broadcasting.
negative Numerical negative of the argument, element-wise.
dot Dot product of two arrays.
add_n Adds all input arguments element-wise.

Trigonometric functions

sin Computes the element-wise sine of the input array.
tan Computes the element-wise tangent of the input array.
arcsin Returns element-wise inverse sine of the input array.
arctan Returns element-wise inverse tangent of the input array.
degrees Converts each element of the input array from radians to degrees.
radians Converts each element of the input array from degrees to radians.

Hyperbolic functions

sinh Returns the hyperbolic sine of the input array, computed element-wise.
tanh Returns the hyperbolic tangent of the input array, computed element-wise.
arcsinh Returns the element-wise inverse hyperbolic sine of the input array, computed element-wise.
arctanh Returns the element-wise inverse hyperbolic tangent of the input array, computed element-wise.

Reduce functions

sum Computes the sum of array elements over given axes.
mean Computes the mean of array elements over given axes.
norm Computes the norm on an NDArray.

Rounding

round Returns element-wise rounded value to the nearest integer of the input.
rint Returns element-wise rounded value to the nearest integer of the input.
fix Returns element-wise rounded value to the nearest integer towards zero of the input.
floor Returns element-wise floor of the input.
ceil Returns element-wise ceiling of the input.
trunc Return the element-wise truncated value of the input.

Exponents and logarithms

expm1 Returns exp(x) - 1 computed element-wise on the input.
log1p Returns element-wise log(1 + x) value of the input.

Powers

sqrt Returns element-wise square-root value of the input.
square Returns element-wise squared value of the input.

Miscellaneous

abs Returns element-wise absolute value of the input.
sign Returns element-wise sign of the input.

Neural network

Updater

sgd_update Update function for Stochastic Gradient Descent (SGD) optimizer.
sgd_mom_update Momentum update function for Stochastic Gradient Descent (SGD) optimizer.
adam_update Update function for Adam optimizer.
adagrad_update Update function for AdaGrad optimizer.

More

make_loss Make your own loss function in network construction.
stop_gradient Stops gradient computation.
Embedding Maps integer indices to vector representations (embeddings).
LinearRegressionOutput Computes and optimizes for squared loss during backward propagation.
LogisticRegressionOutput Applies a logistic function to the input.

API Reference

class mxnet.ndarray.sparse.CSRNDArray(handle, writable=True)[source]

A sparse representation of 2D NDArray in the Compressed Sparse Row format.

A CSRNDArray represents an NDArray as three separate arrays: data, indptr and indices. It uses the CSR representation where the column indices for row i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]].

The column indices for a given row are expected to be sorted in ascending order. Duplicate column entries for the same row are not allowed.

Example

>>> a = mx.nd.array([[0, 1, 0], [2, 0, 0], [0, 0, 0], [0, 0, 3]])
>>> a = a.tostype('csr')
>>> a.data.asnumpy()
array([ 1.,  2.,  3.], dtype=float32)
>>> a.indices.asnumpy()
array([1, 0, 2])
>>> a.indptr.asnumpy()
array([0, 1, 2, 2, 3])

See also

csr_matrix
Several ways to construct a CSRNDArray
__getitem__(key)[source]

x.__getitem__(i) <=> x[i]

Returns a newly created NDArray based on the indexing key.

Parameters:key (int or mxnet.ndarray.NDArray.slice) – Indexing key.

Examples

>>> indptr = np.array([0, 2, 3, 6])
>>> indices = np.array([0, 2, 2, 0, 1, 2])
>>> data = np.array([1, 2, 3, 4, 5, 6])
>>> a = mx.nd.sparse.csr_matrix((data, indices, indptr), shape=(3, 3))
>>> a.asnumpy()
array([[ 1.,  0.,  2.],
       [ 0.,  0.,  3.],
       [ 4.,  5.,  6.]], dtype=float32)
>>> a[1:2].asnumpy()
array([[ 0.,  0.,  3.]], dtype=float32)
>>> a[1].asnumpy()
array([[ 0.,  0.,  3.]], dtype=float32)
>>> a[-1].asnumpy()
array([[ 4.,  5.,  6.]], dtype=float32)
__setitem__(key, value)[source]

x.__setitem__(i, y) <=> x[i]=y

Set self[key] to value. Only slice key [:] is supported.

Parameters:

Examples

>>> src = mx.nd.sparse.zeros('csr', (3,3))
>>> src.asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)
>>> # assign CSRNDArray with same storage type
>>> x = mx.nd.ones((3,3)).tostype('csr')
>>> x[:] = src
>>> x.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> # assign NDArray to CSRNDArray
>>> x[:] = mx.nd.ones((3,3)) * 2
>>> x.asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)
indices

A deep copy NDArray of the indices array of the CSRNDArray. This generates a deep copy of the column indices of the current csr matrix.

Returns:This CSRNDArray’s indices array.
Return type:NDArray
indptr

A deep copy NDArray of the indptr array of the CSRNDArray. This generates a deep copy of the indptr of the current csr matrix.

Returns:This CSRNDArray’s indptr array.
Return type:NDArray
data

A deep copy NDArray of the data array of the CSRNDArray. This generates a deep copy of the data of the current csr matrix.

Returns:This CSRNDArray’s data array.
Return type:NDArray
tostype(stype)[source]

Return a copy of the array with chosen storage type.

Returns:A copy of the array with the chosen storage stype
Return type:NDArray or CSRNDArray
copyto(other)[source]

Copies the value of this array to another array.

If other is a NDArray or CSRNDArray object, then other.shape and self.shape should be the same. This function copies the value from self to other.

If other is a context, a new CSRNDArray will be first created on the target context, and the value of self is copied.

Parameters:other (NDArray or CSRNDArray or Context) – The destination array or context.
Returns:The copied array. If other is an NDArray or CSRNDArray, then the return value and other will point to the same NDArray or CSRNDArray.
Return type:NDArray or CSRNDArray
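
A brief sketch of copying into an existing array of matching shape, following the description above:

>>> a = mx.nd.ones((2,3)).tostype('csr')
>>> b = mx.nd.sparse.zeros('csr', (2,3))
>>> copied = a.copyto(b)
>>> b.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)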
asscipy()[source]

Returns a scipy.sparse.csr.csr_matrix object with value copied from this array

Examples

>>> x = mx.nd.sparse.zeros('csr', (2,3))
>>> y = x.asscipy()
>>> type(y)
<class 'scipy.sparse.csr.csr_matrix'>
>>> y
<2x3 sparse matrix of type '<type 'numpy.float32'>'
with 0 stored elements in Compressed Sparse Row format>
__neg__()

x.__neg__(y) <=> -x

abs(*args, **kwargs)

Convenience fluent method for abs().

The arguments are the same as for abs(), with this array as data.

arcsin(*args, **kwargs)

Convenience fluent method for arcsin().

The arguments are the same as for arcsin(), with this array as data.

arcsinh(*args, **kwargs)

Convenience fluent method for arcsinh().

The arguments are the same as for arcsinh(), with this array as data.

arctan(*args, **kwargs)

Convenience fluent method for arctan().

The arguments are the same as for arctan(), with this array as data.

arctanh(*args, **kwargs)

Convenience fluent method for arctanh().

The arguments are the same as for arctanh(), with this array as data.

as_in_context(context)

Returns an array on the target device with the same value as this array.

If the target context is the same as self.context, then self is returned. Otherwise, a copy is made.

Parameters:context (Context) – The target context.
Returns:The target array.
Return type:NDArray, CSRNDArray or RowSparseNDArray

Examples

>>> x = mx.nd.ones((2,3))
>>> y = x.as_in_context(mx.cpu())
>>> y is x
True
>>> z = x.as_in_context(mx.gpu(0))
>>> z is x
False
asnumpy()

Return a dense numpy.ndarray object with value copied from this array

asscalar()

Returns a scalar whose value is copied from this array.

This function is equivalent to self.asnumpy()[0]. This NDArray must have shape (1,).

Examples

>>> x = mx.nd.ones((1,), dtype='int32')
>>> x.asscalar()
1
>>> type(x.asscalar())
<type 'numpy.int32'>
astype(dtype, copy=True)

Return a copy of the array after casting to a specified type.

Parameters:
  • dtype (numpy.dtype or str) – The type of the returned array.
  • copy (bool) – Default True. By default, astype always returns a newly allocated ndarray on the same context. If this is set to False, and the dtype requested is the same as the ndarray’s dtype, the ndarray is returned instead of a copy.

Examples

>>> x = mx.nd.sparse.zeros('row_sparse', (2,3), dtype='float32')
>>> y = x.astype('int32')
>>> y.dtype
<type 'numpy.int32'>
ceil(*args, **kwargs)

Convenience fluent method for ceil().

The arguments are the same as for ceil(), with this array as data.

check_format(full_check=True)

Check whether the NDArray format is valid.

Parameters:full_check (bool, optional) – If True, rigorous check, O(N) operations. Otherwise basic check, O(1) operations (default True).
clip(*args, **kwargs)

Convenience fluent method for clip().

The arguments are the same as for clip(), with this array as data.

context

Device context of the array.

Examples

>>> x = mx.nd.array([1, 2, 3, 4])
>>> x.context
cpu(0)
>>> type(x.context)
<class 'mxnet.context.Context'>
>>> y = mx.nd.zeros((2,3), mx.gpu(0))
>>> y.context
gpu(0)
copy()

Makes a copy of this NDArray, keeping the same context.

Returns:The copied array
Return type:NDArray, CSRNDArray or RowSparseNDArray

Examples

>>> x = mx.nd.ones((2,3))
>>> y = x.copy()
>>> y.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
degrees(*args, **kwargs)

Convenience fluent method for degrees().

The arguments are the same as for degrees(), with this array as data.

dtype

Data-type of the array’s elements.

Returns:This NDArray’s data type.
Return type:numpy.dtype

Examples

>>> x = mx.nd.zeros((2,3))
>>> x.dtype
<type 'numpy.float32'>
>>> y = mx.nd.zeros((2,3), dtype='int32')
>>> y.dtype
<type 'numpy.int32'>
expm1(*args, **kwargs)

Convenience fluent method for expm1().

The arguments are the same as for expm1(), with this array as data.

fix(*args, **kwargs)

Convenience fluent method for fix().

The arguments are the same as for fix(), with this array as data.

floor(*args, **kwargs)

Convenience fluent method for floor().

The arguments are the same as for floor(), with this array as data.

log1p(*args, **kwargs)

Convenience fluent method for log1p().

The arguments are the same as for log1p(), with this array as data.

mean(*args, **kwargs)

Convenience fluent method for mean().

The arguments are the same as for mean(), with this array as data.

norm(*args, **kwargs)

Convenience fluent method for norm().

The arguments are the same as for norm(), with this array as data.

radians(*args, **kwargs)

Convenience fluent method for radians().

The arguments are the same as for radians(), with this array as data.

rint(*args, **kwargs)

Convenience fluent method for rint().

The arguments are the same as for rint(), with this array as data.

round(*args, **kwargs)

Convenience fluent method for round().

The arguments are the same as for round(), with this array as data.

shape

Tuple of array dimensions.

Examples

>>> x = mx.nd.array([1, 2, 3, 4])
>>> x.shape
(4L,)
>>> y = mx.nd.zeros((2, 3, 4))
>>> y.shape
(2L, 3L, 4L)
sign(*args, **kwargs)

Convenience fluent method for sign().

The arguments are the same as for sign(), with this array as data.

sin(*args, **kwargs)

Convenience fluent method for sin().

The arguments are the same as for sin(), with this array as data.

sinh(*args, **kwargs)

Convenience fluent method for sinh().

The arguments are the same as for sinh(), with this array as data.

slice(*args, **kwargs)

Convenience fluent method for slice().

The arguments are the same as for slice(), with this array as data.

sqrt(*args, **kwargs)

Convenience fluent method for sqrt().

The arguments are the same as for sqrt(), with this array as data.

square(*args, **kwargs)

Convenience fluent method for square().

The arguments are the same as for square(), with this array as data.

stype

Storage-type of the array.

sum(*args, **kwargs)

Convenience fluent method for sum().

The arguments are the same as for sum(), with this array as data.

tan(*args, **kwargs)

Convenience fluent method for tan().

The arguments are the same as for tan(), with this array as data.

tanh(*args, **kwargs)

Convenience fluent method for tanh().

The arguments are the same as for tanh(), with this array as data.

trunc(*args, **kwargs)

Convenience fluent method for trunc().

The arguments are the same as for trunc(), with this array as data.

wait_to_read()

Waits until all previous write operations on the current array are finished.

This method guarantees that all previous write operations that pushed into the backend engine for execution are actually finished.

Examples

>>> import time
>>> tic = time.time()
>>> a = mx.nd.ones((1000,1000))
>>> b = mx.nd.dot(a, a)
>>> print(time.time() - tic) 
0.003854036331176758
>>> b.wait_to_read()
>>> print(time.time() - tic) 
0.0893700122833252
zeros_like(*args, **kwargs)

Convenience fluent method for zeros_like().

The arguments are the same as for zeros_like(), with this array as data.

class mxnet.ndarray.sparse.RowSparseNDArray(handle, writable=True)[source]

A sparse representation of a set of NDArray row slices at given indices.

A RowSparseNDArray represents a multidimensional NDArray using two separate arrays: data and indices. The number of dimensions has to be at least 2.

  • data: an NDArray of any dtype with shape [D0, D1, ..., Dn].
  • indices: a 1-D int64 NDArray with shape [D0] with values sorted in ascending order.

The indices array stores the indices of the row slices that contain non-zero entries, while the values themselves are stored in data. The dense NDArray dense corresponding to a RowSparseNDArray rsp satisfies

dense[rsp.indices[i], :, :, :, ...] = rsp.data[i, :, :, :, ...]

>>> dense.asnumpy()
array([[ 1.,  2., 3.],
       [ 0.,  0., 0.],
       [ 4.,  0., 5.],
       [ 0.,  0., 0.],
       [ 0.,  0., 0.]], dtype=float32)
>>> rsp = dense.tostype('row_sparse')
>>> rsp.indices.asnumpy()
array([0, 2], dtype=int64)
>>> rsp.data.asnumpy()
array([[ 1.,  2., 3.],
       [ 4.,  0., 5.]], dtype=float32)

A RowSparseNDArray is typically used to represent non-zero row slices of a large NDArray of shape [LARGE0, D1, .. , Dn] where LARGE0 >> D0 and most row slices are zeros.

RowSparseNDArray is used principally in the definition of gradients for operations that have sparse gradients (e.g. sparse dot and sparse embedding).

See also

row_sparse_array
Several ways to construct a RowSparseNDArray
__getitem__(key)[source]

x.__getitem__(i) <=> x[i]

Returns a sliced view of this array.

Parameters:key (mxnet.ndarray.NDArray.slice) – Indexing key.

Examples

>>> x = mx.nd.sparse.zeros('row_sparse', (2, 3))
>>> x[:].asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)
__setitem__(key, value)[source]

x.__setitem__(i, y) <=> x[i]=y

Set self[key] to value. Only slice key [:] is supported.

Parameters:

Examples

>>> src = mx.nd.row_sparse([[1, 0, 2], [4, 5, 6]], [0, 2], (3,3))
>>> src.asnumpy()
array([[ 1.,  0.,  2.],
       [ 0.,  0.,  0.],
       [ 4.,  5.,  6.]], dtype=float32)
>>> # assign RowSparseNDArray with same storage type
>>> x = mx.nd.sparse.zeros('row_sparse', (3,3))
>>> x[:] = src
>>> x.asnumpy()
array([[ 1.,  0.,  2.],
       [ 0.,  0.,  0.],
       [ 4.,  5.,  6.]], dtype=float32)
>>> # assign NDArray to RowSparseNDArray
>>> x[:] = mx.nd.ones((3,3))
>>> x.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
indices

A deep copy NDArray of the indices array of the RowSparseNDArray. This generates a deep copy of the row indices of the current row_sparse matrix.

Returns:This RowSparseNDArray’s indices array.
Return type:NDArray
data

A deep copy NDArray of the data array of the RowSparseNDArray. This generates a deep copy of the data of the current row_sparse matrix.

Returns:This RowSparseNDArray’s data array.
Return type:NDArray
tostype(stype)[source]

Return a copy of the array with chosen storage type.

Returns:A copy of the array with the chosen storage stype
Return type:NDArray or RowSparseNDArray
copyto(other)[source]

Copies the value of this array to another array.

If other is a NDArray or RowSparseNDArray object, then other.shape and self.shape should be the same. This function copies the value from self to other.

If other is a context, a new RowSparseNDArray will be first created on the target context, and the value of self is copied.

Parameters:other (NDArray or RowSparseNDArray or Context) – The destination array or context.
Returns:The copied array. If other is an NDArray or RowSparseNDArray, then the return value and other will point to the same NDArray or RowSparseNDArray.
Return type:NDArray or RowSparseNDArray
retain(*args, **kwargs)[source]

Convenience fluent method for retain().

The arguments are the same as for retain(), with this array as data.

abs(*args, **kwargs)

Convenience fluent method for abs().

The arguments are the same as for abs(), with this array as data.

arcsin(*args, **kwargs)

Convenience fluent method for arcsin().

The arguments are the same as for arcsin(), with this array as data.

arcsinh(*args, **kwargs)

Convenience fluent method for arcsinh().

The arguments are the same as for arcsinh(), with this array as data.

arctan(*args, **kwargs)

Convenience fluent method for arctan().

The arguments are the same as for arctan(), with this array as data.

arctanh(*args, **kwargs)

Convenience fluent method for arctanh().

The arguments are the same as for arctanh(), with this array as data.

as_in_context(context)

Returns an array on the target device with the same value as this array.

If the target context is the same as self.context, then self is returned. Otherwise, a copy is made.

Parameters:context (Context) – The target context.
Returns:The target array.
Return type:NDArray, CSRNDArray or RowSparseNDArray

Examples

>>> x = mx.nd.ones((2,3))
>>> y = x.as_in_context(mx.cpu())
>>> y is x
True
>>> z = x.as_in_context(mx.gpu(0))
>>> z is x
False
asnumpy()

Return a dense numpy.ndarray object with value copied from this array

asscalar()

Returns a scalar whose value is copied from this array.

This function is equivalent to self.asnumpy()[0]. This NDArray must have shape (1,).

Examples

>>> x = mx.nd.ones((1,), dtype='int32')
>>> x.asscalar()
1
>>> type(x.asscalar())
<type 'numpy.int32'>
astype(dtype, copy=True)

Return a copy of the array after casting to a specified type.

Parameters:
  • dtype (numpy.dtype or str) – The type of the returned array.
  • copy (bool) – Default True. By default, astype always returns a newly allocated ndarray on the same context. If this is set to False, and the dtype requested is the same as the ndarray’s dtype, the ndarray is returned instead of a copy.

Examples

>>> x = mx.nd.sparse.zeros('row_sparse', (2,3), dtype='float32')
>>> y = x.astype('int32')
>>> y.dtype
<type 'numpy.int32'>
ceil(*args, **kwargs)

Convenience fluent method for ceil().

The arguments are the same as for ceil(), with this array as data.

check_format(full_check=True)

Check whether the NDArray format is valid.

Parameters:full_check (bool, optional) – If True, rigorous check, O(N) operations. Otherwise basic check, O(1) operations (default True).
clip(*args, **kwargs)

Convenience fluent method for clip().

The arguments are the same as for clip(), with this array as data.

context

Device context of the array.

Examples

>>> x = mx.nd.array([1, 2, 3, 4])
>>> x.context
cpu(0)
>>> type(x.context)
<class 'mxnet.context.Context'>
>>> y = mx.nd.zeros((2,3), mx.gpu(0))
>>> y.context
gpu(0)
copy()

Makes a copy of this NDArray, keeping the same context.

Returns:The copied array
Return type:NDArray, CSRNDArray or RowSparseNDArray

Examples

>>> x = mx.nd.ones((2,3))
>>> y = x.copy()
>>> y.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
degrees(*args, **kwargs)

Convenience fluent method for degrees().

The arguments are the same as for degrees(), with this array as data.

dtype

Data-type of the array’s elements.

Returns:This NDArray’s data type.
Return type:numpy.dtype

Examples

>>> x = mx.nd.zeros((2,3))
>>> x.dtype
<type 'numpy.float32'>
>>> y = mx.nd.zeros((2,3), dtype='int32')
>>> y.dtype
<type 'numpy.int32'>
expm1(*args, **kwargs)

Convenience fluent method for expm1().

The arguments are the same as for expm1(), with this array as data.

fix(*args, **kwargs)

Convenience fluent method for fix().

The arguments are the same as for fix(), with this array as data.

floor(*args, **kwargs)

Convenience fluent method for floor().

The arguments are the same as for floor(), with this array as data.

log1p(*args, **kwargs)

Convenience fluent method for log1p().

The arguments are the same as for log1p(), with this array as data.

norm(*args, **kwargs)

Convenience fluent method for norm().

The arguments are the same as for norm(), with this array as data.

radians(*args, **kwargs)

Convenience fluent method for radians().

The arguments are the same as for radians(), with this array as data.

rint(*args, **kwargs)

Convenience fluent method for rint().

The arguments are the same as for rint(), with this array as data.

round(*args, **kwargs)

Convenience fluent method for round().

The arguments are the same as for round(), with this array as data.

shape

Tuple of array dimensions.

Examples

>>> x = mx.nd.array([1, 2, 3, 4])
>>> x.shape
(4L,)
>>> y = mx.nd.zeros((2, 3, 4))
>>> y.shape
(2L, 3L, 4L)
sign(*args, **kwargs)

Convenience fluent method for sign().

The arguments are the same as for sign(), with this array as data.

sin(*args, **kwargs)

Convenience fluent method for sin().

The arguments are the same as for sin(), with this array as data.

sinh(*args, **kwargs)

Convenience fluent method for sinh().

The arguments are the same as for sinh(), with this array as data.

sqrt(*args, **kwargs)

Convenience fluent method for sqrt().

The arguments are the same as for sqrt(), with this array as data.

square(*args, **kwargs)

Convenience fluent method for square().

The arguments are the same as for square(), with this array as data.

stype

Storage-type of the array.

tan(*args, **kwargs)

Convenience fluent method for tan().

The arguments are the same as for tan(), with this array as data.

tanh(*args, **kwargs)

Convenience fluent method for tanh().

The arguments are the same as for tanh(), with this array as data.

trunc(*args, **kwargs)

Convenience fluent method for trunc().

The arguments are the same as for trunc(), with this array as data.

wait_to_read()

Waits until all previous write operations on the current array are finished.

This method guarantees that all previous write operations that pushed into the backend engine for execution are actually finished.

Examples

>>> import time
>>> tic = time.time()
>>> a = mx.nd.ones((1000,1000))
>>> b = mx.nd.dot(a, a)
>>> print(time.time() - tic) 
0.003854036331176758
>>> b.wait_to_read()
>>> print(time.time() - tic) 
0.0893700122833252
zeros_like(*args, **kwargs)

Convenience fluent method for zeros_like().

The arguments are the same as for zeros_like(), with this array as data.

Sparse NDArray API of MXNet.

mxnet.ndarray.sparse.csr_matrix(arg1, shape=None, ctx=None, dtype=None)[source]

Creates a CSRNDArray, a 2D array with compressed sparse row (CSR) format.

The CSRNDArray can be instantiated in several ways:

  • csr_matrix(D):
    to construct a CSRNDArray with a dense 2D array D
    • D (array_like) - An object exposing the array interface, an object whose __array__ method returns an array, or any (nested) sequence.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is D.dtype if D is an NDArray or numpy.ndarray, float32 otherwise.
  • csr_matrix(S)
    to construct a CSRNDArray with a sparse 2D array S
    • S (CSRNDArray or scipy.sparse.csr.csr_matrix) - A sparse matrix.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is S.dtype.
  • csr_matrix((M, N))
    to construct an empty CSRNDArray with shape (M, N)
    • M (int) - Number of rows in the matrix
    • N (int) - Number of columns in the matrix
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is float32.
  • csr_matrix((data, indices, indptr))
    to construct a CSRNDArray based on the definition of compressed sparse row format using three separate arrays, where the column indices for row i are stored in indices[indptr[i]:indptr[i+1]] and their corresponding values are stored in data[indptr[i]:indptr[i+1]]. The column indices for a given row are expected to be sorted in ascending order. Duplicate column entries for the same row are not allowed.
    • data (array_like) - An object exposing the array interface, which holds all the non-zero entries of the matrix in row-major order.
    • indices (array_like) - An object exposing the array interface, which stores the column index for each non-zero element in data.
    • indptr (array_like) - An object exposing the array interface, which stores the offset into data of the first non-zero element number of each row of the matrix.
    • shape (tuple of int, optional) - The shape of the array. The default shape is inferred from the indices and indptr arrays.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is data.dtype if data is an NDArray or numpy.ndarray, float32 otherwise.
  • csr_matrix((data, (row, col)))
    to construct a CSRNDArray based on the COOrdinate format using three separate arrays, where row[i] is the row index of the element, col[i] is the column index of the element and data[i] is the data corresponding to the element. All the missing elements in the input are taken to be zero.
    • data (array_like) - An object exposing the array interface, which holds all the non-zero entries of the matrix in COO format.
    • row (array_like) - An object exposing the array interface, which stores the row index for each non zero element in data.
    • col (array_like) - An object exposing the array interface, which stores the col index for each non zero element in data.
    • shape (tuple of int, optional) - The shape of the array. The default shape is inferred from the row and col arrays.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is float32.
Parameters:
  • arg1 (tuple of int, tuple of array_like, array_like, CSRNDArray, scipy.sparse.csr_matrix, scipy.sparse.coo_matrix, tuple of int or tuple of array_like) – The argument to help instantiate the csr matrix. See above for further details.
  • shape (tuple of int, optional) – The shape of the csr matrix.
  • ctx (Context, optional) – Device context (default is the current default context).
  • dtype (str or numpy.dtype, optional) – The data type of the output array.
Returns:

A CSRNDArray with the csr storage representation.

Return type:

CSRNDArray

Example

>>> a = mx.nd.sparse.csr_matrix(([1, 2, 3], [1, 0, 2], [0, 1, 2, 2, 3]), shape=(4, 3))
>>> a.asnumpy()
array([[ 0.,  1.,  0.],
       [ 2.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  3.]], dtype=float32)
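
A second example, sketching the COO-based constructor described above under the documented semantics that data[i] is placed at position (row[i], col[i]):

>>> b = mx.nd.sparse.csr_matrix(([7, 8, 9], ([0, 2, 2], [1, 0, 2])), shape=(3, 3))
>>> b.asnumpy()
array([[ 0.,  7.,  0.],
       [ 0.,  0.,  0.],
       [ 8.,  0.,  9.]], dtype=float32)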

See also

CSRNDArray()
MXNet NDArray in compressed sparse row format.
mxnet.ndarray.sparse.row_sparse_array(arg1, shape=None, ctx=None, dtype=None)[source]

Creates a RowSparseNDArray, a multidimensional row sparse array with a set of tensor slices at given indices.

The RowSparseNDArray can be instantiated in several ways:

  • row_sparse_array(D):
    to construct a RowSparseNDArray with a dense ndarray D
    • D (array_like) - An object exposing the array interface, an object whose __array__ method returns an array, or any (nested) sequence.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is D.dtype if D is an NDArray or numpy.ndarray, float32 otherwise.
  • row_sparse_array(S)
    to construct a RowSparseNDArray with a sparse ndarray S
    • S (RowSparseNDArray) - A sparse ndarray.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is S.dtype.
  • row_sparse_array((D0, D1 .. Dn))
    to construct an empty RowSparseNDArray with shape (D0, D1, ... Dn)
    • D0, D1 .. Dn (int) - The shape of the ndarray
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is float32.
  • row_sparse_array((data, indices))
    to construct a RowSparseNDArray based on the definition of row sparse format using two separate arrays, where the indices stores the indices of the row slices with non-zeros, while the values are stored in data. The corresponding NDArray dense represented by RowSparseNDArray rsp has dense[rsp.indices[i], :, :, :, ...] = rsp.data[i, :, :, :, ...]. The row indices are expected to be sorted in ascending order.
    • data (array_like) - An object exposing the array interface, which holds all the non-zero row slices of the array.
    • indices (array_like) - An object exposing the array interface, which stores the row index for each row slice with non-zero elements.
    • shape (tuple of int, optional) - The shape of the array. The default shape is inferred from the indices array.
    • ctx (Context, optional) - Device context (default is the current default context).
    • dtype (str or numpy.dtype, optional) - The data type of the output array. The default dtype is float32.
Parameters:
  • arg1 (NDArray, numpy.ndarray, RowSparseNDArray, tuple of int or tuple of array_like) – The argument to help instantiate the row sparse ndarray. See above for further details.
  • shape (tuple of int, optional) – The shape of the row sparse ndarray. (Default value = None)
  • ctx (Context, optional) – Device context (default is the current default context).
  • dtype (str or numpy.dtype, optional) – The data type of the output array. (Default value = None)
Returns:

A RowSparseNDArray with the row_sparse storage representation.

Return type:

RowSparseNDArray

Examples

>>> a = mx.nd.sparse.row_sparse_array(([[1, 2], [3, 4]], [1, 4]), shape=(6, 2))
>>> a.asnumpy()
array([[ 0.,  0.],
       [ 1.,  2.],
       [ 0.,  0.],
       [ 0.,  0.],
       [ 3.,  4.],
       [ 0.,  0.]], dtype=float32)

See also

RowSparseNDArray()
MXNet NDArray in row sparse format.
mxnet.ndarray.sparse.add(lhs, rhs)[source]

Returns element-wise sum of the input arrays with broadcasting.

Equivalent to lhs + rhs, mx.nd.broadcast_add(lhs, rhs) and mx.nd.broadcast_plus(lhs, rhs) when shapes of lhs and rhs do not match. If lhs.shape == rhs.shape, this is equivalent to mx.nd.elemwise_add(lhs, rhs)

Note

If the corresponding dimensions of two arrays have the same size or one of them has size 1, then the arrays are broadcastable to a common shape.

Parameters:
  • lhs (scalar or mxnet.ndarray.sparse.array) – First array to be added.
  • rhs (scalar or mxnet.ndarray.sparse.array) – Second array to be added. If lhs.shape != rhs.shape, they must be broadcastable to a common shape.
Returns:

The element-wise sum of the input arrays.

Return type:

NDArray

Examples

>>> a = mx.nd.ones((2,3)).tostype('csr')
>>> b = mx.nd.ones((2,3)).tostype('csr')
>>> a.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> b.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> (a+b).asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)
>>> c = mx.nd.ones((2,3)).tostype('row_sparse')
>>> d = mx.nd.ones((2,3)).tostype('row_sparse')
>>> c.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> d.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> (c+d).asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)
mxnet.ndarray.sparse.subtract(lhs, rhs)[source]

Returns element-wise difference of the input arrays with broadcasting.

Equivalent to lhs - rhs, mx.nd.broadcast_sub(lhs, rhs) and mx.nd.broadcast_minus(lhs, rhs) when shapes of lhs and rhs do not match. If lhs.shape == rhs.shape, this is equivalent to mx.nd.elemwise_sub(lhs, rhs)

Note

If the corresponding dimensions of two arrays have the same size or one of them has size 1, then the arrays are broadcastable to a common shape.

Parameters:
  • lhs (scalar or mxnet.ndarray.sparse.array) – First array to be subtracted.
  • rhs (scalar or mxnet.ndarray.sparse.array) – Second array to be subtracted. If lhs.shape != rhs.shape, they must be broadcastable to a common shape.
Returns:

The element-wise difference of the input arrays.

Return type:

NDArray

Examples

>>> a = mx.nd.ones((2,3)).tostype('csr')
>>> b = mx.nd.ones((2,3)).tostype('csr')
>>> a.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> b.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> (a-b).asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)
>>> c = mx.nd.ones((2,3)).tostype('row_sparse')
>>> d = mx.nd.ones((2,3)).tostype('row_sparse')
>>> c.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> d.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> (c-d).asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)
mxnet.ndarray.sparse.multiply(lhs, rhs)[source]

Returns element-wise product of the input arrays with broadcasting.

Equivalent to lhs * rhs and mx.nd.broadcast_mul(lhs, rhs) when shapes of lhs and rhs do not match. If lhs.shape == rhs.shape, this is equivalent to mx.nd.elemwise_mul(lhs, rhs)

Note

If the corresponding dimensions of two arrays have the same size or one of them has size 1, then the arrays are broadcastable to a common shape.

Parameters:
  • lhs (scalar or mxnet.ndarray.sparse.array) – First array to be multiplied.
  • rhs (scalar or mxnet.ndarray.sparse.array) – Second array to be multiplied. If lhs.shape != rhs.shape, they must be broadcastable to a common shape.
Returns:

The element-wise multiplication of the input arrays.

Return type:

NDArray

Examples

>>> x = mx.nd.ones((2,3)).tostype('csr')
>>> y = mx.nd.arange(2).reshape((2,1))
>>> z = mx.nd.arange(3)
>>> x.asnumpy()
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> y.asnumpy()
array([[ 0.],
       [ 1.]], dtype=float32)
>>> z.asnumpy()
array([ 0.,  1.,  2.], dtype=float32)
>>> (x*2).asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)
>>> (x*y).asnumpy()
array([[ 0.,  0.,  0.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> mx.nd.sparse.multiply(x, y).asnumpy()
array([[ 0.,  0.,  0.],
       [ 1.,  1.,  1.]], dtype=float32)
>>> (x*z).asnumpy()
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.]], dtype=float32)
>>> mx.nd.sparse.multiply(x, z).asnumpy()
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.]], dtype=float32)
>>> z = z.reshape((1, 3))
>>> z.asnumpy()
array([[ 0.,  1.,  2.]], dtype=float32)
>>> (x*z).asnumpy()
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.]], dtype=float32)
>>> mx.nd.sparse.multiply(x, z).asnumpy()
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.]], dtype=float32)
mxnet.ndarray.sparse.ElementWiseSum(*args, **kwargs)

Adds all input arguments element-wise.

\[add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n\]

add_n is potentially more efficient than calling add by n times.

The storage type of add_n output depends on storage types of inputs

  • add_n(row_sparse, row_sparse, ..) = row_sparse
  • add_n(default, csr, default) = default
  • add_n(any input combinations longer than 4 (>4) with at least one default type) = default
  • otherwise, add_n falls back to dense storage for all inputs and generates a dense (default) output, as sketched below
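
A short sketch of these storage-type rules; the stypes shown are what the rules above imply:

>>> a = mx.nd.ones((2,3)).tostype('row_sparse')
>>> b = mx.nd.ones((2,3)).tostype('row_sparse')
>>> mx.nd.sparse.add_n(a, b).stype
'row_sparse'
>>> mx.nd.sparse.add_n(a, mx.nd.ones((2,3))).stype
'default'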

Defined in src/operator/tensor/elemwise_sum.cc:L156

Parameters:
  • args (NDArray[]) – Positional input arguments
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.Embedding(data=None, weight=None, input_dim=_Null, output_dim=_Null, dtype=_Null, sparse_grad=_Null, out=None, name=None, **kwargs)

Maps integer indices to vector representations (embeddings).

This operator maps words to real-valued vectors in a high-dimensional space, called word embeddings. These embeddings can capture semantic and syntactic properties of the words. For example, it has been noted that in the learned embedding spaces, similar words tend to be close to each other and dissimilar words far apart.

For an input array of shape (d1, ..., dK), the shape of an output array is (d1, ..., dK, output_dim). All the input values should be integers in the range [0, input_dim).

If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be (ip0, op0).

By default, if any index mentioned is too large, it is replaced by the index that addresses the last vector in an embedding matrix.

Examples:

input_dim = 4
output_dim = 5

// Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
y = [[  0.,   1.,   2.,   3.,   4.],
     [  5.,   6.,   7.,   8.,   9.],
     [ 10.,  11.,  12.,  13.,  14.],
     [ 15.,  16.,  17.,  18.,  19.]]

// Input array x represents n-grams(2-gram). So, x = [(w1,w3), (w0,w2)]
x = [[ 1.,  3.],
     [ 0.,  2.]]

// Mapped input x to its vector representation y.
Embedding(x, y, 4, 5) = [[[  5.,   6.,   7.,   8.,   9.],
                          [ 15.,  16.,  17.,  18.,  19.]],

                         [[  0.,   1.,   2.,   3.,   4.],
                          [ 10.,  11.,  12.,  13.,  14.]]]

The storage type of weight can be either row_sparse or default.

Note

If “sparse_grad” is set to True, the storage type of the gradient w.r.t. the weight will be “row_sparse”. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy update is turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: /api/python/optimization/optimization.html

Defined in src/operator/tensor/indexing_op.cc:L519
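
A sketch of the sparse_grad behaviour described in the note, assuming the usual autograd workflow and that NDArray.attach_grad accepts a row_sparse gradient storage type:

>>> x = mx.nd.array([[1., 3.], [0., 2.]])
>>> w = mx.nd.ones((4, 5))
>>> w.attach_grad(stype='row_sparse')
>>> with mx.autograd.record():
...     y = mx.nd.sparse.Embedding(data=x, weight=w, input_dim=4, output_dim=5, sparse_grad=True)
>>> y.backward()
>>> w.grad.stype
'row_sparse'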

Parameters:
  • data (NDArray) – The input array to the embedding operator.
  • weight (NDArray) – The embedding weight matrix.
  • input_dim (int, required) – Vocabulary size of the input indices.
  • output_dim (int, required) – Dimension of the embedding vectors.
  • dtype ({'float16', 'float32', 'float64', 'int32', 'int64', 'int8', 'uint8'},optional, default='float32') – Data type of weight.
  • sparse_grad (boolean, optional, default=0) – Compute row sparse gradient in the backward calculation. If set to True, the grad’s storage type is row_sparse.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.FullyConnected(data=None, weight=None, bias=None, num_hidden=_Null, no_bias=_Null, flatten=_Null, out=None, name=None, **kwargs)

Applies a linear transformation: \(Y = XW^T + b\).

If flatten is set to be true, then the shapes are:

  • data: (batch_size, x1, x2, ..., xn)
  • weight: (num_hidden, x1 * x2 * ... * xn)
  • bias: (num_hidden,)
  • out: (batch_size, num_hidden)

If flatten is set to be false, then the shapes are:

  • data: (x1, x2, ..., xn, input_dim)
  • weight: (num_hidden, input_dim)
  • bias: (num_hidden,)
  • out: (x1, x2, ..., xn, num_hidden)

The learnable parameters include both weight and bias.

If no_bias is set to be true, then the bias term is ignored.
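
A small sketch of the shapes above with dense inputs and flatten=True; each output element is the dot product of a row of data with a row of weight, plus the bias:

>>> x = mx.nd.ones((4, 5))           # (batch_size, input_dim)
>>> w = mx.nd.ones((3, 5))           # (num_hidden, input_dim)
>>> b = mx.nd.zeros((3,))            # (num_hidden,)
>>> out = mx.nd.sparse.FullyConnected(data=x, weight=w, bias=b, num_hidden=3)
>>> out.asnumpy()
array([[ 5.,  5.,  5.],
       [ 5.,  5.,  5.],
       [ 5.,  5.,  5.],
       [ 5.,  5.,  5.]], dtype=float32)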

Note

The sparse support for FullyConnected is limited to forward evaluation with row_sparse weight and bias, where the length of weight.indices and bias.indices must be equal to num_hidden. This could be useful for model inference with row_sparse weights trained with importance sampling or noise contrastive estimation.

To compute linear transformation with ‘csr’ sparse data, sparse.dot is recommended instead of sparse.FullyConnected.

Defined in src/operator/nn/fully_connected.cc:L271

Parameters:
  • data (NDArray) – Input data.
  • weight (NDArray) – Weight matrix.
  • bias (NDArray) – Bias parameter.
  • num_hidden (int, required) – Number of hidden nodes of the output.
  • no_bias (boolean, optional, default=0) – Whether to disable bias parameter.
  • flatten (boolean, optional, default=1) – Whether to collapse all but the first axis of the input data tensor.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.LinearRegressionOutput(data=None, label=None, grad_scale=_Null, out=None, name=None, **kwargs)

Computes and optimizes for squared loss during backward propagation. Just outputs data during forward propagation.

If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the squared loss estimated over \(n\) samples is defined as

\(\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2\)

Note

Use the LinearRegressionOutput as the final output layer of a net.

The storage type of label can be default or csr

  • LinearRegressionOutput(default, default) = default
  • LinearRegressionOutput(default, csr) = default

By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L92
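
A brief sketch of the forward behaviour described above; the forward pass simply returns data, since the squared loss only influences the backward pass:

>>> data = mx.nd.array([[0.5], [1.5]])
>>> label = mx.nd.array([[1.0], [1.0]])
>>> mx.nd.sparse.LinearRegressionOutput(data=data, label=label).asnumpy()
array([[ 0.5],
       [ 1.5]], dtype=float32)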

Parameters:
  • data (NDArray) – Input data to the function.
  • label (NDArray) – Input label to the function.
  • grad_scale (float, optional, default=1) – Scale the gradient by a float factor
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.LogisticRegressionOutput(data=None, label=None, grad_scale=_Null, out=None, name=None, **kwargs)

Applies a logistic function to the input.

The logistic function, also known as the sigmoid function, is computed as \(\frac{1}{1+exp(-\textbf{x})}\).

Commonly, the sigmoid is used to squash the real-valued output of a linear model \(w^T x + b\) into the [0,1] range so that it can be interpreted as a probability. It is suitable for binary classification or probability prediction tasks.

Note

Use the LogisticRegressionOutput as the final output layer of a net.

The storage type of label can be default or csr

  • LogisticRegressionOutput(default, default) = default
  • LogisticRegressionOutput(default, csr) = default

The loss function used is the Binary Cross Entropy Loss:

\(-{(y\log(p) + (1 - y)\log(1 - p))}\)

Where y is the ground truth probability of positive outcome for a given example, and p the probability predicted by the model. By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L152
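
A brief sketch of the forward behaviour described above; the forward pass returns the sigmoid of the input (sigmoid(0) = 0.5):

>>> data = mx.nd.array([[0.0], [0.0]])
>>> label = mx.nd.array([[1.0], [0.0]])
>>> mx.nd.sparse.LogisticRegressionOutput(data=data, label=label).asnumpy()
array([[ 0.5],
       [ 0.5]], dtype=float32)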

Parameters:
  • data (NDArray) – Input data to the function.
  • label (NDArray) – Input label to the function.
  • grad_scale (float, optional, default=1) – Scale the gradient by a float factor
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.MAERegressionOutput(data=None, label=None, grad_scale=_Null, out=None, name=None, **kwargs)

Computes mean absolute error of the input.

MAE is a risk metric corresponding to the expected value of the absolute error.

If \(\hat{y}_i\) is the predicted value of the i-th sample, and \(y_i\) is the corresponding target value, then the mean absolute error (MAE) estimated over \(n\) samples is defined as

\(\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1\)

Note

Use the MAERegressionOutput as the final output layer of a net.

The storage type of label can be default or csr

  • MAERegressionOutput(default, default) = default
  • MAERegressionOutput(default, csr) = default

By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example. The parameter grad_scale can be used to change this scale to grad_scale/m.

Defined in src/operator/regression_output.cc:L120

Parameters:
  • data (NDArray) – Input data to the function.
  • label (NDArray) – Input label to the function.
  • grad_scale (float, optional, default=1) – Scale the gradient by a float factor
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.abs(data=None, out=None, name=None, **kwargs)

Returns element-wise absolute value of the input.

Example:

abs([-2, 0, 3]) = [2, 0, 3]

The storage type of abs output depends upon the input storage type:

  • abs(default) = default
  • abs(row_sparse) = row_sparse
  • abs(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L662

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.adagrad_update(weight=None, grad=None, history=None, lr=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, out=None, name=None, **kwargs)

Update function for AdaGrad optimizer.

Referenced from Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, and available at http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.

Updates are applied by:

rescaled_grad = clip(grad * rescale_grad, clip_gradient)
history = history + square(rescaled_grad)
w = w - learning_rate * rescaled_grad / sqrt(history + epsilon)

Note that non-zero values for the weight decay option are not supported.

Defined in src/operator/optimizer_op.cc:L665

Parameters:
  • weight (NDArray) – Weight
  • grad (NDArray) – Gradient
  • history (NDArray) – History
  • lr (float, required) – Learning rate
  • epsilon (float, optional, default=1e-07) – epsilon
  • wd (float, optional, default=0) – weight decay
  • rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
  • clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.adam_update(weight=None, grad=None, mean=None, var=None, lr=_Null, beta1=_Null, beta2=_Null, epsilon=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, out=None, name=None, **kwargs)

Update function for Adam optimizer. Adam is seen as a generalization of AdaGrad.

Adam update consists of the following steps, where g represents gradient and m, v are 1st and 2nd order moment estimates (mean and variance).

\[\begin{split}g_t = \nabla J(W_{t-1})\\ m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\ W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }\end{split}\]

It updates the weights using:

m = beta1*m + (1-beta1)*grad
v = beta2*v + (1-beta2)*(grad**2)
w += - learning_rate * m / (sqrt(v) + epsilon)

However, if grad’s storage type is row_sparse, lazy_update is True and the storage type of weight is the same as those of m and v, only the row slices whose indices appear in grad.indices are updated (for w, m and v):

for row in grad.indices:
    m[row] = beta1*m[row] + (1-beta1)*grad[row]
    v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
    w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)

Defined in src/operator/optimizer_op.cc:L495

Parameters:
  • weight (NDArray) – Weight
  • grad (NDArray) – Gradient
  • mean (NDArray) – Moving mean
  • var (NDArray) – Moving variance
  • lr (float, required) – Learning rate
  • beta1 (float, optional, default=0.9) – The decay rate for the 1st moment estimates.
  • beta2 (float, optional, default=0.999) – The decay rate for the 2nd moment estimates.
  • epsilon (float, optional, default=1e-08) – A small constant for numerical stability.
  • wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
  • rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
  • clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
  • lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and all of w, m and v have the same stype
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
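
A rough usage sketch (illustrative, not from the original reference; it assumes import mxnet as mx and uses dense storage for brevity — the row_sparse lazy-update path described above takes the same arguments):

import mxnet as mx

weight = mx.nd.ones((2, 2))
grad   = mx.nd.array([[0.1, 0.1], [0.2, 0.2]])
mean   = mx.nd.zeros((2, 2))   # 1st moment state (m)
var    = mx.nd.zeros((2, 2))   # 2nd moment state (v)

# One Adam step; the updated weight is written back into `weight` via out=.
mx.nd.sparse.adam_update(weight, grad, mean, var, lr=0.01,
                         beta1=0.9, beta2=0.999, epsilon=1e-8, out=weight)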

mxnet.ndarray.sparse.add_n(*args, **kwargs)

Adds all input arguments element-wise.

\[add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n\]

add_n is potentially more efficient than calling add by n times.

The storage type of add_n output depends on storage types of inputs

  • add_n(row_sparse, row_sparse, ..) = row_sparse
  • add_n(default, csr, default) = default
  • add_n(any input combinations longer than 4 (>4) with at least one default type) = default
  • otherwise, add_n falls back to default storage for all inputs and generates output with default storage

Defined in src/operator/tensor/elemwise_sum.cc:L156

Parameters:
  • args (NDArray[]) – Positional input arguments
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
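
A small sketch (illustrative, assuming import mxnet as mx): summing two row_sparse arrays keeps the result row_sparse, per the rules above:

import mxnet as mx

a = mx.nd.array([[1, 0], [0, 0], [2, 3]]).tostype('row_sparse')
b = mx.nd.array([[0, 0], [4, 0], [1, 1]]).tostype('row_sparse')

s = mx.nd.sparse.add_n(a, b)
print(s.stype)       # 'row_sparse'
print(s.asnumpy())   # [[1. 0.]
                     #  [4. 0.]
                     #  [3. 4.]]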

mxnet.ndarray.sparse.arccos(data=None, out=None, name=None, **kwargs)

Returns element-wise inverse cosine of the input array.

The input should be in range [-1, 1]. The output is in the closed interval \([0, \pi]\)

\[arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]\]

The storage type of arccos output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L123

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.arccosh(data=None, out=None, name=None, **kwargs)

Returns the element-wise inverse hyperbolic cosine of the input array, computed element-wise.

The storage type of arccosh output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L264

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.arcsin(data=None, out=None, name=None, **kwargs)

Returns element-wise inverse sine of the input array.

The input should be in the range [-1, 1]. The output is in the closed interval of [\(-\pi/2\), \(\pi/2\)].

\[arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]\]

The storage type of arcsin output depends upon the input storage type:

  • arcsin(default) = default
  • arcsin(row_sparse) = row_sparse
  • arcsin(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L104

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.arcsinh(data=None, out=None, name=None, **kwargs)

Returns the element-wise inverse hyperbolic sine of the input array, computed element-wise.

The storage type of arcsinh output depends upon the input storage type:

  • arcsinh(default) = default
  • arcsinh(row_sparse) = row_sparse
  • arcsinh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L250

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.arctan(data=None, out=None, name=None, **kwargs)

Returns element-wise inverse tangent of the input array.

The output is in the closed interval \([-\pi/2, \pi/2]\)

\[arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]\]

The storage type of arctan output depends upon the input storage type:

  • arctan(default) = default
  • arctan(row_sparse) = row_sparse
  • arctan(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L144

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.arctanh(data=None, out=None, name=None, **kwargs)

Returns the element-wise inverse hyperbolic tangent of the input array, computed element-wise.

The storage type of arctanh output depends upon the input storage type:

  • arctanh(default) = default
  • arctanh(row_sparse) = row_sparse
  • arctanh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L281

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_add(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise sum of the input arrays with broadcasting.

broadcast_plus is an alias to the function broadcast_add.

Example:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

y = [[ 0.],
     [ 1.]]

broadcast_add(x, y) = [[ 1.,  1.,  1.],
                       [ 2.,  2.,  2.]]

broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                        [ 2.,  2.,  2.]]

Supported sparse operations:

broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_div(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise division of the input arrays with broadcasting.

Example:

x = [[ 6.,  6.,  6.],
     [ 6.,  6.,  6.]]

y = [[ 2.],
     [ 3.]]

broadcast_div(x, y) = [[ 3.,  3.,  3.],
                       [ 2.,  2.,  2.]]

Supported sparse operations:

broadcast_div(csr, dense(1D)) = csr

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L187

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_minus(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise difference of the input arrays with broadcasting.

broadcast_minus is an alias to the function broadcast_sub.

Example:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

y = [[ 0.],
     [ 1.]]

broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                       [ 0.,  0.,  0.]]

broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                         [ 0.,  0.,  0.]]

Supported sparse operations:

broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_mul(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise product of the input arrays with broadcasting.

Example:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

y = [[ 0.],
     [ 1.]]

broadcast_mul(x, y) = [[ 0.,  0.,  0.],
                       [ 1.,  1.,  1.]]

Supported sparse operations:

broadcast_mul(csr, dense(1D)) = csr

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L146

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_plus(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise sum of the input arrays with broadcasting.

broadcast_plus is an alias to the function broadcast_add.

Example:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

y = [[ 0.],
     [ 1.]]

broadcast_add(x, y) = [[ 1.,  1.,  1.],
                       [ 2.,  2.,  2.]]

broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                        [ 2.,  2.,  2.]]

Supported sparse operations:

broadcast_add(csr, dense(1D)) = dense
broadcast_add(dense(1D), csr) = dense

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.broadcast_sub(lhs=None, rhs=None, out=None, name=None, **kwargs)

Returns element-wise difference of the input arrays with broadcasting.

broadcast_minus is an alias to the function broadcast_sub.

Example:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

y = [[ 0.],
     [ 1.]]

broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                       [ 0.,  0.,  0.]]

broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                         [ 0.,  0.,  0.]]

Supported sparse operations:

broadcast_sub/minus(csr, dense(1D)) = dense
broadcast_sub/minus(dense(1D), csr) = dense

Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

Parameters:
  • lhs (NDArray) – First input to the function
  • rhs (NDArray) – Second input to the function
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.cast_storage(data=None, stype=_Null, out=None, name=None, **kwargs)

Casts tensor storage type to the new type.

When an NDArray with default storage type is cast to csr or row_sparse storage, the result is compact, which means:

  • for csr, zero values will not be retained
  • for row_sparse, row slices of all zeros will not be retained

The storage type of cast_storage output depends on stype parameter:

  • cast_storage(csr, ‘default’) = default
  • cast_storage(row_sparse, ‘default’) = default
  • cast_storage(default, ‘csr’) = csr
  • cast_storage(default, ‘row_sparse’) = row_sparse
  • cast_storage(csr, ‘csr’) = csr
  • cast_storage(row_sparse, ‘row_sparse’) = row_sparse

Example:

dense = [[ 0.,  1.,  0.],
         [ 2.,  0.,  3.],
         [ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

# cast to row_sparse storage type
rsp = cast_storage(dense, 'row_sparse')
rsp.indices = [0, 1]
rsp.values = [[ 0.,  1.,  0.],
              [ 2.,  0.,  3.]]

# cast to csr storage type
csr = cast_storage(dense, 'csr')
csr.indices = [1, 0, 2]
csr.values = [ 1.,  2.,  3.]
csr.indptr = [0, 1, 3, 3, 3]

Defined in src/operator/tensor/cast_storage.cc:L71

Parameters:
  • data (NDArray) – The input.
  • stype ({'csr', 'default', 'row_sparse'}, required) – Output storage type.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
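
The example above in runnable form (an illustrative sketch assuming import mxnet as mx):

import mxnet as mx

dense = mx.nd.array([[0., 1., 0.],
                     [2., 0., 3.],
                     [0., 0., 0.],
                     [0., 0., 0.]])

csr = mx.nd.sparse.cast_storage(dense, 'csr')
print(csr.stype)              # 'csr'
print(csr.indptr.asnumpy())   # [0 1 3 3 3]

rsp = mx.nd.sparse.cast_storage(dense, 'row_sparse')
print(rsp.indices.asnumpy())  # [0 1]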

mxnet.ndarray.sparse.cbrt(data=None, out=None, name=None, **kwargs)

Returns element-wise cube-root value of the input.

\[cbrt(x) = \sqrt[3]{x}\]

Example:

cbrt([1, 8, -125]) = [1, 2, -5]

The storage type of cbrt output depends upon the input storage type:

  • cbrt(default) = default
  • cbrt(row_sparse) = row_sparse
  • cbrt(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L883

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.ceil(data=None, out=None, name=None, **kwargs)

Returns element-wise ceiling of the input.

The ceil of the scalar x is the smallest integer i, such that i >= x.

Example:

ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1.,  2.,  2.,  3.]

The storage type of ceil output depends upon the input storage type:

  • ceil(default) = default
  • ceil(row_sparse) = row_sparse
  • ceil(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L740

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.clip(data=None, a_min=_Null, a_max=_Null, out=None, name=None, **kwargs)

Clips (limits) the values in an array.

Given an interval, values outside the interval are clipped to the interval edges. Clipping x between a_min and a_max would be:

clip(x, a_min, a_max) = max(min(x, a_max), a_min)

Example:

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

clip(x,1,8) = [ 1.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  8.]

The storage type of clip output depends on storage types of inputs and the a_min, a_max parameter values:

  • clip(default) = default
  • clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
  • clip(csr, a_min <= 0, a_max >= 0) = csr
  • clip(row_sparse, a_min < 0, a_max < 0) = default
  • clip(row_sparse, a_min > 0, a_max > 0) = default
  • clip(csr, a_min < 0, a_max < 0) = csr
  • clip(csr, a_min > 0, a_max > 0) = csr

Defined in src/operator/tensor/matrix_op.cc:L619

Parameters:
  • data (NDArray) – Input array.
  • a_min (float, required) – Minimum value
  • a_max (float, required) – Maximum value
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
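
An illustrative sketch (assuming import mxnet as mx) of the a_min <= 0 <= a_max case, which keeps csr storage per the rules above:

import mxnet as mx

x = mx.nd.array([[0, -2, 0],
                 [5,  0, 9]]).tostype('csr')

y = mx.nd.sparse.clip(x, a_min=-1.0, a_max=8.0)
print(y.stype)      # 'csr' (zeros stay zero because a_min <= 0 <= a_max)
print(y.asnumpy())  # [[ 0. -1.  0.]
                    #  [ 5.  0.  8.]]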

mxnet.ndarray.sparse.concat(*data, **kwargs)

Joins input arrays along a given axis.

Note

Concat is deprecated. Use concat instead.

The dimensions of the input arrays should be the same except the axis along which they will be concatenated. The dimension of the output array along the concatenated axis will be equal to the sum of the corresponding dimensions of the input arrays.

The storage type of concat output depends on storage types of inputs

  • concat(csr, csr, ..., csr, dim=0) = csr
  • otherwise, concat generates output with default storage

Example:

x = [[1,1],[2,2]]
y = [[3,3],[4,4],[5,5]]
z = [[6,6], [7,7],[8,8]]

concat(x,y,z,dim=0) = [[ 1.,  1.],
                       [ 2.,  2.],
                       [ 3.,  3.],
                       [ 4.,  4.],
                       [ 5.,  5.],
                       [ 6.,  6.],
                       [ 7.,  7.],
                       [ 8.,  8.]]

Note that you cannot concat x,y,z along dimension 1 since dimension
0 is not the same for all the input arrays.

concat(y,z,dim=1) = [[ 3.,  3.,  6.,  6.],
                      [ 4.,  4.,  7.,  7.],
                      [ 5.,  5.,  8.,  8.]]

Defined in src/operator/nn/concat.cc:L368

Parameters:
  • data (NDArray[]) – List of arrays to concatenate
  • dim (int, optional, default='1') – the dimension along which to concatenate.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
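
A short sketch (illustrative, assuming import mxnet as mx): concatenating csr inputs along dim=0 keeps csr storage, as noted above:

import mxnet as mx

a = mx.nd.array([[1, 0], [0, 2]]).tostype('csr')
b = mx.nd.array([[0, 3]]).tostype('csr')

c = mx.nd.sparse.concat(a, b, dim=0)
print(c.stype)      # 'csr'
print(c.asnumpy())  # [[1. 0.]
                    #  [0. 2.]
                    #  [0. 3.]]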

mxnet.ndarray.sparse.cos(data=None, out=None, name=None, **kwargs)

Computes the element-wise cosine of the input array.

The input should be in radians (\(2\pi\) rad equals 360 degrees).

\[cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]\]

The storage type of cos output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L63

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.cosh(data=None, out=None, name=None, **kwargs)

Returns the hyperbolic cosine of the input array, computed element-wise.

\[cosh(x) = 0.5\times(exp(x) + exp(-x))\]

The storage type of cosh output is always dense

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L216

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.degrees(data=None, out=None, name=None, **kwargs)

Converts each element of the input array from radians to degrees.

\[degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]\]

The storage type of degrees output depends upon the input storage type:

  • degrees(default) = default
  • degrees(row_sparse) = row_sparse
  • degrees(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L163

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.dot(lhs=None, rhs=None, transpose_a=_Null, transpose_b=_Null, forward_stype=_Null, out=None, name=None, **kwargs)

Dot product of two arrays.

dot's behavior depends on the input array dimensions:

  • 1-D arrays: inner product of vectors

  • 2-D arrays: matrix multiplication

  • N-D arrays: a sum product over the last axis of the first input and the first axis of the second input

    For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the result array will have shape (n,m,r,s). It is computed by:

    dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])
    

    Example:

    x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
    y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))
    dot(x,y)[0,0,1,1] = 0
    sum(x[0,0,:]*y[:,1,1]) = 0
    

The storage type of dot output depends on storage types of inputs, transpose option and forward_stype option for output storage type. Implemented sparse operations include:

  • dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
  • dot(csr, default, transpose_a=True) = default
  • dot(csr, default, transpose_a=True) = row_sparse
  • dot(csr, default) = default
  • dot(csr, row_sparse) = default
  • dot(default, csr) = csr (CPU only)
  • dot(default, csr, forward_stype=’default’) = default
  • dot(default, csr, transpose_b=True, forward_stype=’default’) = default

If the combination of input storage types and forward_stype does not match any of the above patterns, dot will fallback and generate output with default storage.

Note

If the storage type of the lhs is “csr”, the storage type of the gradient w.r.t. rhs will be “row_sparse”. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad and Adam. Note that by default lazy updates are turned on, which may perform differently from standard updates. For more details, please check the Optimization API at: /api/python/optimization/optimization.html

Defined in src/operator/tensor/dot.cc:L77

Parameters:
  • lhs (NDArray) – The first input
  • rhs (NDArray) – The second input
  • transpose_a (boolean, optional, default=0) – If true then transpose the first input before dot.
  • transpose_b (boolean, optional, default=0) – If true then transpose the second input before dot.
  • forward_stype ({None, 'csr', 'default', 'row_sparse'}, optional, default='None') – The desired storage type of the forward output given by user. If the combination of input storage types and this hint does not match any implemented ones, the dot operator will perform fallback operation and still produce an output of the desired storage type.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
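
A minimal sketch of the dot(csr, default) = default case (illustrative only; assumes import mxnet as mx):

import mxnet as mx

csr   = mx.nd.array([[1, 0, 2],
                     [0, 0, 3]]).tostype('csr')
dense = mx.nd.ones((3, 2))

out = mx.nd.sparse.dot(csr, dense)
print(out.stype)      # 'default'
print(out.asnumpy())  # [[3. 3.]
                      #  [3. 3.]]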

mxnet.ndarray.sparse.elemwise_add(lhs=None, rhs=None, out=None, name=None, **kwargs)

Adds arguments element-wise.

The storage type of elemwise_add output depends on storage types of inputs

  • elemwise_add(row_sparse, row_sparse) = row_sparse
  • elemwise_add(csr, csr) = csr
  • elemwise_add(default, csr) = default
  • elemwise_add(csr, default) = default
  • elemwise_add(default, rsp) = default
  • elemwise_add(rsp, default) = default
  • otherwise, elemwise_add generates output with default storage
Parameters:
  • lhs (NDArray) – first input
  • rhs (NDArray) – second input
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.elemwise_div(lhs=None, rhs=None, out=None, name=None, **kwargs)

Divides arguments element-wise.

The storage type of elemwise_div output is always dense

Parameters:
  • lhs (NDArray) – first input
  • rhs (NDArray) – second input
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.elemwise_mul(lhs=None, rhs=None, out=None, name=None, **kwargs)

Multiplies arguments element-wise.

The storage type of elemwise_mul output depends on storage types of inputs

  • elemwise_mul(default, default) = default
  • elemwise_mul(row_sparse, row_sparse) = row_sparse
  • elemwise_mul(default, row_sparse) = row_sparse
  • elemwise_mul(row_sparse, default) = row_sparse
  • elemwise_mul(csr, csr) = csr
  • otherwise, elemwise_mul generates output with default storage
Parameters:
  • lhs (NDArray) – first input
  • rhs (NDArray) – second input
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
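
For example (an illustrative sketch assuming import mxnet as mx), multiplying a default array with a row_sparse array yields a row_sparse result, per the rules above:

import mxnet as mx

dense = mx.nd.ones((3, 2)) * 2
rsp   = mx.nd.array([[1, 1], [0, 0], [3, 4]]).tostype('row_sparse')

out = mx.nd.sparse.elemwise_mul(dense, rsp)
print(out.stype)      # 'row_sparse'
print(out.asnumpy())  # [[2. 2.]
                      #  [0. 0.]
                      #  [6. 8.]]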

mxnet.ndarray.sparse.elemwise_sub(lhs=None, rhs=None, out=None, name=None, **kwargs)

Subtracts arguments element-wise.

The storage type of elemwise_sub output depends on storage types of inputs

  • elemwise_sub(row_sparse, row_sparse) = row_sparse
  • elemwise_sub(csr, csr) = csr
  • elemwise_sub(default, csr) = default
  • elemwise_sub(csr, default) = default
  • elemwise_sub(default, rsp) = default
  • elemwise_sub(rsp, default) = default
  • otherwise, elemwise_sub generates output with default storage
Parameters:
  • lhs (NDArray) – first input
  • rhs (NDArray) – second input
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.exp(data=None, out=None, name=None, **kwargs)

Returns element-wise exponential value of the input.

\[exp(x) = e^x \approx 2.718^x\]

Example:

exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]

The storage type of exp output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L939

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.expm1(data=None, out=None, name=None, **kwargs)

Returns exp(x) - 1 computed element-wise on the input.

This function provides greater precision than exp(x) - 1 for small values of x.

The storage type of expm1 output depends upon the input storage type:

  • expm1(default) = default
  • expm1(row_sparse) = row_sparse
  • expm1(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1018

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.fix(data=None, out=None, name=None, **kwargs)

Returns element-wise rounded value to the nearest integer towards zero of the input.

Example:

fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1.,  1., 2.]

The storage type of fix output depends upon the input storage type:

  • fix(default) = default
  • fix(row_sparse) = row_sparse
  • fix(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L797

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.floor(data=None, out=None, name=None, **kwargs)

Returns element-wise floor of the input.

The floor of the scalar x is the largest integer i, such that i <= x.

Example:

floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2.,  1.,  1.,  2.]

The storage type of floor output depends upon the input storage type:

  • floor(default) = default
  • floor(row_sparse) = row_sparse
  • floor(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L759

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.ftrl_update(weight=None, grad=None, z=None, n=None, lr=_Null, lamda1=_Null, beta=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, out=None, name=None, **kwargs)

Update function for Ftrl optimizer. Referenced from Ad Click Prediction: a View from the Trenches, available at http://dl.acm.org/citation.cfm?id=2488200.

It updates the weights using:

rescaled_grad = clip(grad * rescale_grad, clip_gradient)
z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
n += rescaled_grad**2
w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)

If w, z and n are all of row_sparse storage type, only the row slices whose indices appear in grad.indices are updated (for w, z and n):

for row in grad.indices:
    rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
    z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
    n[row] += rescaled_grad[row]**2
    w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)

Defined in src/operator/optimizer_op.cc:L632

Parameters:
  • weight (NDArray) – Weight
  • grad (NDArray) – Gradient
  • z (NDArray) – z
  • n (NDArray) – Square of grad
  • lr (float, required) – Learning rate
  • lamda1 (float, optional, default=0.01) – The L1 regularization coefficient.
  • beta (float, optional, default=1) – Per-Coordinate Learning Rate beta.
  • wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
  • rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
  • clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.gamma(data=None, out=None, name=None, **kwargs)

Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.

The storage type of gamma output is always dense

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.gammaln(data=None, out=None, name=None, **kwargs)

Returns element-wise log of the absolute value of the gamma function of the input.

The storage type of gammaln output is always dense

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.log(data=None, out=None, name=None, **kwargs)

Returns element-wise Natural logarithmic value of the input.

The natural logarithm is logarithm in base e, so that log(exp(x)) = x

The storage type of log output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L951

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.log10(data=None, out=None, name=None, **kwargs)

Returns element-wise Base-10 logarithmic value of the input.

10**log10(x) = x

The storage type of log10 output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L963

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.log1p(data=None, out=None, name=None, **kwargs)

Returns element-wise log(1 + x) value of the input.

This function is more accurate than log(1 + x) for small x so that \(1+x\approx 1\)

The storage type of log1p output depends upon the input storage type:

  • log1p(default) = default
  • log1p(row_sparse) = row_sparse
  • log1p(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L1000

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.log2(data=None, out=None, name=None, **kwargs)

Returns element-wise Base-2 logarithmic value of the input.

2**log2(x) = x

The storage type of log2 output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L975

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.make_loss(data=None, out=None, name=None, **kwargs)

Make your own loss function in network construction.

This operator accepts a customized loss function symbol as a terminal loss and the symbol should be an operator with no backward dependency. The output of this function is the gradient of loss with respect to the input data.

For example, if you are making a cross entropy loss function, assume out is the predicted output and label is the true label; then the cross entropy can be defined as:

cross_entropy = label * log(out) + (1 - label) * log(1 - out)
loss = make_loss(cross_entropy)

We will need to use make_loss when we are creating our own loss function or we want to combine multiple loss functions. Also we may want to stop some variables’ gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

The storage type of make_loss output depends upon the input storage type:

  • make_loss(default) = default
  • make_loss(row_sparse) = row_sparse

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L300

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.mean(data=None, axis=_Null, keepdims=_Null, exclude=_Null, out=None, name=None, **kwargs)

Computes the mean of array elements over given axes.

Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L132

Parameters:
  • data (NDArray) – The input
  • axis (Shape or None, optional, default=None) –

    The axis or axes along which to perform the reduction.

    The default, axis=(), will compute over all elements into a scalar array with shape (1,).

    If axis is int, a reduction is performed on a particular axis.

    If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.

    If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

    Negative values means indexing from right to left.

  • keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
  • exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.negative(data=None, out=None, name=None, **kwargs)

Numerical negative of the argument, element-wise.

The storage type of negative output depends upon the input storage type:

  • negative(default) = default
  • negative(row_sparse) = row_sparse
  • negative(csr) = csr
Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.norm(data=None, ord=_Null, axis=_Null, keepdims=_Null, out=None, name=None, **kwargs)

Computes the norm on an NDArray.

This operator computes the norm on an NDArray with the specified axis, depending on the value of the ord parameter. By default, it computes the L2 norm on the entire array. Currently, only ord=2 is supported for sparse ndarrays.

Examples:

x = [[[1, 2],
      [3, 4]],
     [[2, 2],
      [5, 6]]]

norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ]
                          [5.3851647 6.3245554]]

norm(x, ord=1, axis=1) = [[4., 6.],
                          [7., 8.]]

rsp = x.cast_storage('row_sparse')

norm(rsp) = [5.47722578]

csr = x.cast_storage('csr')

norm(csr) = [5.47722578]

Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L350

Parameters:
  • data (NDArray) – The input
  • ord (int, optional, default='2') – Order of the norm. Currently ord=1 and ord=2 is supported.
  • axis (Shape or None, optional, default=None) –
    The axis or axes along which to perform the reduction.
    The default, axis=(), will compute over all elements into a scalar array with shape (1,). If axis is int, a reduction is performed on a particular axis. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed.
  • keepdims (boolean, optional, default=0) – If this is set to True, the reduced axis is left in the result as dimension with size one.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
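
A small sketch (illustrative, assuming import mxnet as mx) of the default L2 norm over a csr array, the only ord currently supported for sparse inputs:

import mxnet as mx

x = mx.nd.array([[1, 2, 0],
                 [0, 3, 4]]).tostype('csr')

n = mx.nd.sparse.norm(x)   # defaults to ord=2 over the whole array
print(n.asnumpy())         # approximately [5.477], i.e. sqrt(1 + 4 + 9 + 16)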

mxnet.ndarray.sparse.radians(data=None, out=None, name=None, **kwargs)

Converts each element of the input array from degrees to radians.

\[radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]\]

The storage type of radians output depends upon the input storage type:

  • radians(default) = default
  • radians(row_sparse) = row_sparse
  • radians(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L182

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.relu(data=None, out=None, name=None, **kwargs)

Computes rectified linear.

\[max(features, 0)\]

The storage type of relu output depends upon the input storage type:

  • relu(default) = default
  • relu(row_sparse) = row_sparse
  • relu(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.retain(data=None, indices=None, out=None, name=None, **kwargs)

Picks rows specified by a user input index array from a row sparse matrix and saves them in the output sparse matrix.

Example:

data = [[1, 2], [3, 4], [5, 6]]
indices = [0, 1, 3]
shape = (4, 2)
rsp_in = row_sparse(data, indices)
to_retain = [0, 3]
rsp_out = retain(rsp_in, to_retain)
rsp_out.values = [[1, 2], [5, 6]]
rsp_out.indices = [0, 3]

The storage type of retain output depends on storage types of inputs

  • retain(row_sparse, default) = row_sparse
  • otherwise, retain is not supported

Defined in src/operator/tensor/sparse_retain.cc:L53

Parameters:
  • data (NDArray) – The input array for sparse_retain operator.
  • indices (NDArray) – The index array of rows ids that will be retained.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
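
The example above, sketched imperatively (illustrative; assumes import mxnet as mx and builds the row_sparse input by casting a dense array):

import mxnet as mx

rsp = mx.nd.array([[1, 2], [3, 4], [0, 0], [5, 6]]).tostype('row_sparse')
to_retain = mx.nd.array([0, 3])

out = mx.nd.sparse.retain(rsp, to_retain)
print(out.indices.asnumpy())  # [0 3]
print(out.asnumpy())          # [[1. 2.]
                              #  [0. 0.]
                              #  [0. 0.]
                              #  [5. 6.]]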

mxnet.ndarray.sparse.rint(data=None, out=None, name=None, **kwargs)

Returns element-wise rounded value to the nearest integer of the input.

Note

  • For input n.5 rint returns n while round returns n+1.
  • For input -n.5 both rint and round return -n-1.

Example:

rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2.,  1., -2.,  2.,  2.]

The storage type of rint output depends upon the input storage type:

  • rint(default) = default
  • rint(row_sparse) = row_sparse
  • rint(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L721

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.round(data=None, out=None, name=None, **kwargs)

Returns element-wise rounded value to the nearest integer of the input.

Example:

round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2.,  2., -2.,  2.,  2.]

The storage type of round output depends upon the input storage type:

  • round(default) = default
  • round(row_sparse) = row_sparse
  • round(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L700

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.rsqrt(data=None, out=None, name=None, **kwargs)

Returns element-wise inverse square-root value of the input.

\[rsqrt(x) = 1/\sqrt{x}\]

Example:

rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]

The storage type of rsqrt output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L860

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sgd_mom_update(weight=None, grad=None, mom=None, lr=_Null, momentum=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, out=None, name=None, **kwargs)

Momentum update function for Stochastic Gradient Descent (SGD) optimizer.

Momentum update has better convergence rates on neural networks. Mathematically it looks like below:

\[\begin{split}v_1 = \alpha * \nabla J(W_0)\\ v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\ W_t = W_{t-1} + v_t\end{split}\]

It updates the weights using:

v = momentum * v - learning_rate * gradient
weight += v

Where the parameter momentum is the decay rate of momentum estimates at each epoch.

However, if grad’s storage type is row_sparse, lazy_update is True and weight’s storage type is the same as momentum’s storage type, only the row slices whose indices appear in grad.indices are updated (for both weight and momentum):

for row in gradient.indices:
    v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]

Defined in src/operator/optimizer_op.cc:L372

Parameters:
  • weight (NDArray) – Weight
  • grad (NDArray) – Gradient
  • mom (NDArray) – Momentum
  • lr (float, required) – Learning rate
  • momentum (float, optional, default=0) – The decay rate of momentum estimates at each epoch.
  • wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
  • rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
  • clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
  • lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse and both weight and momentum have the same stype
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sgd_update(weight=None, grad=None, lr=_Null, wd=_Null, rescale_grad=_Null, clip_gradient=_Null, lazy_update=_Null, out=None, name=None, **kwargs)

Update function for Stochastic Gradient Descent (SGD) optimizer.

It updates the weights using:

weight = weight - learning_rate * (gradient + wd * weight)

However, if gradient is of row_sparse storage type and lazy_update is True, only the row slices whose indices appear in grad.indices are updated:

for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])

Defined in src/operator/optimizer_op.cc:L331

Parameters:
  • weight (NDArray) – Weight
  • grad (NDArray) – Gradient
  • lr (float, required) – Learning rate
  • wd (float, optional, default=0) – Weight decay augments the objective function with a regularization term that penalizes large weights. The penalty scales with the square of the magnitude of each weight.
  • rescale_grad (float, optional, default=1) – Rescale gradient to grad = rescale_grad*grad.
  • clip_gradient (float, optional, default=-1) – Clip gradient to the range of [-clip_gradient, clip_gradient] If clip_gradient <= 0, gradient clipping is turned off. grad = max(min(grad, clip_gradient), -clip_gradient).
  • lazy_update (boolean, optional, default=1) – If true, lazy updates are applied if gradient’s stype is row_sparse.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
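
A lazy-update sketch (illustrative, assuming import mxnet as mx): a dense weight combined with a row_sparse gradient, so only the rows present in grad.indices are rewritten:

import mxnet as mx

weight = mx.nd.ones((4, 2))
grad = mx.nd.array([[1., 1.], [0., 0.], [2., 2.], [0., 0.]]).tostype('row_sparse')

mx.nd.sparse.sgd_update(weight, grad, lr=0.1, lazy_update=True, out=weight)
print(weight.asnumpy())  # rows 0 and 2 become 0.9 and 0.8; rows 1 and 3 stay 1.0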

mxnet.ndarray.sparse.sigmoid(data=None, out=None, name=None, **kwargs)

Computes sigmoid of x element-wise.

\[y = 1 / (1 + exp(-x))\]

The storage type of sigmoid output is always dense

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L101

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sign(data=None, out=None, name=None, **kwargs)

Returns element-wise sign of the input.

Example:

sign([-2, 0, 3]) = [-1, 0, 1]

The storage type of sign output depends upon the input storage type:

  • sign(default) = default
  • sign(row_sparse) = row_sparse
  • sign(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L681

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sin(data=None, out=None, name=None, **kwargs)

Computes the element-wise sine of the input array.

The input should be in radians (\(2\pi\) rad equals 360 degrees).

\[sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]\]

The storage type of sin output depends upon the input storage type:

  • sin(default) = default
  • sin(row_sparse) = row_sparse
  • sin(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L46

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sinh(data=None, out=None, name=None, **kwargs)

Returns the hyperbolic sine of the input array, computed element-wise.

\[sinh(x) = 0.5\times(exp(x) - exp(-x))\]

The storage type of sinh output depends upon the input storage type:

  • sinh(default) = default
  • sinh(row_sparse) = row_sparse
  • sinh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L201

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.slice(data=None, begin=_Null, end=_Null, step=_Null, out=None, name=None, **kwargs)

Slices a region of the array.

Note

crop is deprecated. Use slice instead.

This function returns a sliced array between the indices given by begin and end with the corresponding step.

For an input array of shape=(d_0, d_1, ..., d_n-1), slice operation with begin=(b_0, b_1...b_m-1), end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1), where m <= n, results in an array with the shape (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

The resulting array’s k-th dimension contains elements from the k-th dimension of the input array starting from index b_k (inclusive) with step s_k until reaching e_k (exclusive).

If the k-th elements are None in the sequence of begin, end, and step, the following rule will be used to set default values. If s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k; else, set b_k=d_k-1, e_k=-1.

The storage type of slice output depends on storage types of inputs

  • slice(csr) = csr
  • otherwise, slice generates output with default storage

Note

When input data storage type is csr, it only supports step=(), or step=(None,), or step=(1,) to generate a csr output. For other step parameter values, it falls back to slicing a dense tensor.

Example:

x = [[  1.,   2.,   3.,   4.],
     [  5.,   6.,   7.,   8.],
     [  9.,  10.,  11.,  12.]]

slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                   [ 6.,  7.,  8.]]
slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                          [5.,  7.],
                                                          [1.,  3.]]

Defined in src/operator/tensor/matrix_op.cc:L414

Parameters:
  • data (NDArray) – Source input
  • begin (Shape(tuple), required) – starting indices for the slice operation, supports negative indices.
  • end (Shape(tuple), required) – ending indices for the slice operation, supports negative indices.
  • step (Shape(tuple), optional, default=[]) – step for the slice operation, supports negative values.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
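
A short sketch (illustrative, assuming import mxnet as mx): slicing rows of a csr array with the default step keeps csr storage:

import mxnet as mx

x = mx.nd.array([[1, 0, 2, 0],
                 [0, 3, 0, 4],
                 [5, 0, 0, 6]]).tostype('csr')

y = mx.nd.sparse.slice(x, begin=(1,), end=(3,))
print(y.stype)      # 'csr'
print(y.asnumpy())  # [[0. 3. 0. 4.]
                    #  [5. 0. 0. 6.]]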

mxnet.ndarray.sparse.sqrt(data=None, out=None, name=None, **kwargs)

Returns element-wise square-root value of the input.

\[\textrm{sqrt}(x) = \sqrt{x}\]

Example:

sqrt([4, 9, 16]) = [2, 3, 4]

The storage type of sqrt output depends upon the input storage type:

  • sqrt(default) = default
  • sqrt(row_sparse) = row_sparse
  • sqrt(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L840

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.square(data=None, out=None, name=None, **kwargs)

Returns element-wise squared value of the input.

\[square(x) = x^2\]

Example:

square([2, 3, 4]) = [4, 9, 16]

The storage type of square output depends upon the input storage type:

  • square(default) = default
  • square(row_sparse) = row_sparse
  • square(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L817

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.stop_gradient(data=None, out=None, name=None, **kwargs)

Stops gradient computation.

Stops the accumulated gradient of the inputs from flowing through this operator in the backward direction. In other words, this operator prevents the contribution of its inputs to be taken into account for computing gradients.

Example:

v1 = [1, 2]
v2 = [0, 1]
a = Variable('a')
b = Variable('b')
b_stop_grad = stop_gradient(3 * b)
loss = MakeLoss(b_stop_grad + a)

executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
executor.forward(is_train=True, a=v1, b=v2)
executor.outputs
[ 1.  5.]

executor.backward()
executor.grad_arrays
[ 0.  0.]
[ 1.  1.]

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L267

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.sum(data=None, axis=_Null, keepdims=_Null, exclude=_Null, out=None, name=None, **kwargs)

Computes the sum of array elements over given axes.

Note

sum and sum_axis are equivalent. For ndarray of csr storage type summation along axis 0 and axis 1 is supported. Setting keepdims or exclude to True will cause a fallback to dense operator.

Example:

data = [[[1, 2], [2, 3], [1, 3]],
        [[1, 4], [4, 3], [5, 2]],
        [[7, 1], [7, 2], [7, 3]]]

sum(data, axis=1)
[[  4.   8.]
 [ 10.   9.]
 [ 21.   6.]]

sum(data, axis=[1,2])
[ 12.  19.  27.]

data = [[1, 2, 0],
        [3, 0, 1],
        [4, 1, 0]]

csr = cast_storage(data, 'csr')

sum(csr, axis=0)
[ 8.  3.  1.]

sum(csr, axis=1)
[ 3.  4.  5.]

Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L116

Parameters:
  • data (NDArray) – The input
  • axis (Shape or None, optional, default=None) –

    The axis or axes along which to perform the reduction.

    The default, axis=(), will compute over all elements into a scalar array with shape (1,).

    If axis is int, a reduction is performed on a particular axis.

    If axis is a tuple of ints, a reduction is performed on all the axes specified in the tuple.

    If exclude is true, reduction will be performed on the axes that are NOT in axis instead.

    Negative values means indexing from right to left.

  • keepdims (boolean, optional, default=0) – If this is set to True, the reduced axes are left in the result as dimension with size one.
  • exclude (boolean, optional, default=0) – Whether to perform reduction on axis that are NOT in axis instead.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays
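
The csr part of the example above in runnable form (illustrative; assumes import mxnet as mx):

import mxnet as mx

csr = mx.nd.array([[1, 2, 0],
                   [3, 0, 1],
                   [4, 1, 0]]).tostype('csr')

print(mx.nd.sparse.sum(csr, axis=0).asnumpy())  # [8. 3. 1.]
print(mx.nd.sparse.sum(csr, axis=1).asnumpy())  # [3. 4. 5.]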

mxnet.ndarray.sparse.tan(data=None, out=None, name=None, **kwargs)

Computes the element-wise tangent of the input array.

The input should be in radians (\(2\pi\) rad equals 360 degrees).

\[tan([0, \pi/4, \pi/2]) = [0, 1, -inf]\]

The storage type of tan output depends upon the input storage type:

  • tan(default) = default
  • tan(row_sparse) = row_sparse
  • tan(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L83

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.tanh(data=None, out=None, name=None, **kwargs)

Returns the hyperbolic tangent of the input array, computed element-wise.

\[tanh(x) = sinh(x) / cosh(x)\]

The storage type of tanh output depends upon the input storage type:

  • tanh(default) = default
  • tanh(row_sparse) = row_sparse
  • tanh(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L234

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

mxnet.ndarray.sparse.trunc(data=None, out=None, name=None, **kwargs)

Return the element-wise truncated value of the input.

The truncated value of the scalar x is the nearest integer i which is closer to zero than x is. In short, the fractional part of the signed number x is discarded.

Example:

trunc([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1.,  1.,  1.,  2.]

The storage type of trunc output depends upon the input storage type:

  • trunc(default) = default
  • trunc(row_sparse) = row_sparse
  • trunc(csr) = csr

Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L779

Parameters:
  • data (NDArray) – The input array.
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

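A runnable counterpart of the example above with a csr input. This is a sketch assuming import mxnet as mx; numpy print formatting may vary.

>>> import mxnet as mx
>>> x = mx.nd.array([[-2.1, 0, 1.9]]).tostype('csr')
>>> y = mx.nd.sparse.trunc(x)
>>> y.stype
'csr'
>>> y.asnumpy()
array([[-2.,  0.,  1.]], dtype=float32)
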
mxnet.ndarray.sparse.where(condition=None, x=None, y=None, out=None, name=None, **kwargs)

Return the elements, either from x or y, depending on the condition.

Given three ndarrays, condition, x, and y, return an ndarray with elements taken from x or y depending on whether the corresponding element of condition is true or false. x and y must have the same shape. If condition has the same shape as x, each element in the output array is taken from x if the corresponding element of condition is true, and from y otherwise.

If condition does not have the same shape as x, it must be a 1D array whose size is the same as x’s first dimension size. Each row of the output array is from x’s row if the corresponding element from condition is true, and from y’s row if false.

Note that all non-zero values are interpreted as True in condition.

Examples:

x = [[1, 2], [3, 4]]
y = [[5, 6], [7, 8]]
cond = [[0, 1], [-1, 0]]

where(cond, x, y) = [[5, 2], [3, 8]]

csr_cond = cast_storage(cond, 'csr')

where(csr_cond, x, y) = [[5, 2], [3, 8]]

Defined in src/operator/tensor/control_flow_op.cc:L57

Parameters:
  • condition (NDArray) – condition array
  • x (NDArray) –
  • y (NDArray) –
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

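The example above can be reproduced as a short interactive session. This is a sketch assuming import mxnet as mx; output formatting may vary.

>>> import mxnet as mx
>>> x = mx.nd.array([[1, 2], [3, 4]])
>>> y = mx.nd.array([[5, 6], [7, 8]])
>>> cond = mx.nd.array([[0, 1], [-1, 0]]).tostype('csr')
>>> mx.nd.sparse.where(cond, x, y).asnumpy()
array([[ 5.,  2.],
       [ 3.,  8.]], dtype=float32)
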
mxnet.ndarray.sparse.zeros_like(data=None, out=None, name=None, **kwargs)

Return an array of zeros with the same shape, type and storage type as the input array.

The storage type of zeros_like output depends on the storage type of the input:

  • zeros_like(row_sparse) = row_sparse
  • zeros_like(csr) = csr
  • zeros_like(default) = default

Examples:

x = [[ 1.,  1.,  1.],
     [ 1.,  1.,  1.]]

zeros_like(x) = [[ 0.,  0.,  0.],
                 [ 0.,  0.,  0.]]
Parameters:
  • data (NDArray) – The input
  • out (NDArray, optional) – The output NDArray to hold the result.
Returns:

out – The output of this function.

Return type:

NDArray or list of NDArrays

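A hedged runnable variant of the example above with a csr input (assuming import mxnet as mx; numpy print formatting may vary):

>>> import mxnet as mx
>>> x = mx.nd.ones((2, 3)).tostype('csr')
>>> z = mx.nd.sparse.zeros_like(x)
>>> z.stype
'csr'
>>> z.asnumpy()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)
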
mxnet.ndarray.sparse.divide(lhs, rhs)[source]

Returns element-wise division of the input arrays with broadcasting.

Equivalent to lhs / rhs. When the shapes of lhs and rhs do not match, this is equivalent to mx.nd.broadcast_div(lhs, rhs); if lhs.shape == rhs.shape, it is equivalent to mx.nd.elemwise_div(lhs, rhs).

Note

If the corresponding dimensions of two arrays have the same size or one of them has size 1, then the arrays are broadcastable to a common shape.

Parameters:
  • lhs (scalar or mxnet.ndarray.sparse.array) – First array in division.
  • rhs (scalar or mxnet.ndarray.sparse.array) – Second array in division. If lhs.shape != rhs.shape, lhs and rhs must be broadcastable to a common shape.
Returns:

The element-wise division of the input arrays.

Return type:

NDArray

Examples

>>> x = (mx.nd.ones((2,3))*6).tostype('csr')
>>> y = mx.nd.arange(2).reshape((2,1)) + 1
>>> z = mx.nd.arange(3) + 1
>>> x.asnumpy()
array([[ 6.,  6.,  6.],
       [ 6.,  6.,  6.]], dtype=float32)
>>> y.asnumpy()
array([[ 1.],
       [ 2.]], dtype=float32)
>>> z.asnumpy()
array([ 1.,  2.,  3.], dtype=float32)
>>> x/2

>>> (x/3).asnumpy()
array([[ 2.,  2.,  2.],
       [ 2.,  2.,  2.]], dtype=float32)
>>> (x/y).asnumpy()
array([[ 6.,  6.,  6.],
       [ 3.,  3.,  3.]], dtype=float32)
>>> mx.nd.sparse.divide(x,y).asnumpy()
array([[ 6.,  6.,  6.],
       [ 3.,  3.,  3.]], dtype=float32)
>>> (x/z).asnumpy()
array([[ 6.,  3.,  2.],
       [ 6.,  3.,  2.]], dtype=float32)
>>> mx.nd.sparse.divide(x,z).asnumpy()
array([[ 6.,  3.,  2.],
       [ 6.,  3.,  2.]], dtype=float32)
>>> z = z.reshape((1,3))
>>> z.asnumpy()
array([[ 1.,  2.,  3.]], dtype=float32)
>>> (x/z).asnumpy()
array([[ 6.,  3.,  2.],
       [ 6.,  3.,  2.]], dtype=float32)
>>> mx.nd.sparse.divide(x,z).asnumpy()
array([[ 6.,  3.,  2.],
       [ 6.,  3.,  2.]], dtype=float32)

Sparse NDArray API of MXNet.

mxnet.ndarray.sparse.zeros(stype, shape, ctx=None, dtype=None, **kwargs)[source]

Return a new array of given shape and type, filled with zeros.

Parameters:
  • stype (string) – The storage type of the array to be created, such as ‘row_sparse’, ‘csr’, etc.
  • shape (int or tuple of int) – The shape of the array
  • ctx (Context, optional) – An optional device context (default is the current default context)
  • dtype (str or numpy.dtype, optional) – An optional value type (default is float32)
Returns:

A created array

Return type:

RowSparseNDArray or CSRNDArray

Examples

>>> mx.nd.sparse.zeros('csr', (1,2))
<CSRNDArray 1x2 @cpu(0)>
>>> mx.nd.sparse.zeros('row_sparse', (1,2), ctx=mx.cpu(), dtype='float16').asnumpy()
array([[ 0.,  0.]], dtype=float16)
mxnet.ndarray.sparse.empty(stype, shape, ctx=None, dtype=None)[source]

Returns a new array of given shape and type, without initializing entries.

Parameters:
  • stype (string) – The storage type of the empty array, such as ‘row_sparse’, ‘csr’, etc.
  • shape (int or tuple of int) – The shape of the empty array.
  • ctx (Context, optional) – An optional device context (default is the current default context).
  • dtype (str or numpy.dtype, optional) – An optional value type (default is float32).
Returns:

A created array.

Return type:

CSRNDArray or RowSparseNDArray

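Since empty has no example of its own, here is a minimal hedged sketch (assuming import mxnet as mx; per the description above, the entry values should not be relied upon, so only the shape and storage type are checked):

>>> import mxnet as mx
>>> a = mx.nd.sparse.empty('csr', (2, 3))
>>> a.stype
'csr'
>>> a.shape
(2, 3)
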
mxnet.ndarray.sparse.array(source_array, ctx=None, dtype=None)[source]

Creates a sparse array from any object exposing the array interface.

Parameters:
  • source_array (RowSparseNDArray, CSRNDArray or scipy.sparse.csr.csr_matrix) – The source sparse array
  • ctx (Context, optional) – The default context is source_array.context if source_array is an NDArray. The current default context otherwise.
  • dtype (str or numpy.dtype, optional) – The data type of the output array. The default dtype is source_array.dtype if source_array is an NDArray, numpy.ndarray or scipy.sparse.csr.csr_matrix, float32 otherwise.
Returns:

An array with the same contents as the source_array.

Return type:

RowSparseNDArray or CSRNDArray

Examples

>>> import scipy.sparse as spsp
>>> csr = spsp.csr_matrix((2, 100))
>>> mx.nd.sparse.array(csr)
<CSRNDArray 2x100 @cpu(0)>
>>> mx.nd.sparse.array(mx.nd.sparse.zeros('csr', (3, 2)))
<CSRNDArray 3x2 @cpu(0)>
>>> mx.nd.sparse.array(mx.nd.sparse.zeros('row_sparse', (3, 2)))
<RowSparseNDArray 3x2 @cpu(0)>

NDArray API of MXNet.

mxnet.ndarray.load(fname)[source]

Loads an array from file.

See more details in save.

Parameters:
  • fname (str) – The filename.
Returns:

Loaded data.

Return type:

list of NDArray, RowSparseNDArray or CSRNDArray, or dict of str to NDArray, RowSparseNDArray or CSRNDArray

mxnet.ndarray.save(fname, data)[source]

Saves a list of arrays or a dict of str->array to file.

Examples of filenames:

  • /path/to/file
  • s3://my-bucket/path/to/file (if compiled with AWS S3 support)
  • hdfs://path/to/file (if compiled with HDFS support)
Parameters:
  • fname (str) – The filename.
  • data (NDArray, RowSparseNDArray or CSRNDArray, or a list or dict of them) – The data to save.

Examples

>>> x = mx.nd.zeros((2,3))
>>> y = mx.nd.ones((1,4))
>>> mx.nd.save('my_list', [x,y])
>>> mx.nd.save('my_dict', {'x':x, 'y':y})
>>> mx.nd.load('my_list')
[<NDArray 2x3 @cpu(0)>, <NDArray 1x4 @cpu(0)>]
>>> mx.nd.load('my_dict')
{'y': <NDArray 1x4 @cpu(0)>, 'x': <NDArray 2x3 @cpu(0)>}
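
Because this page documents the sparse package, a small hedged sketch of round-tripping a CSRNDArray through save and load may also be useful. The file name 'csr_data' is arbitrary, and the usual import mxnet as mx is assumed.

>>> import mxnet as mx
>>> csr = mx.nd.sparse.zeros('csr', (2, 3))
>>> mx.nd.save('csr_data', [csr])
>>> loaded = mx.nd.load('csr_data')
>>> loaded[0].stype
'csr'
>>> loaded[0].shape
(2, 3)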