org.apache.mxnet

NDArrayBase

Related Doc: package mxnet

abstract class NDArrayBase extends AnyRef

Linear Supertypes
AnyRef, Any
Known Subclasses
NDArray

Instance Constructors

  1. new NDArrayBase()

Abstract Value Members

  1. abstract def Activation(args: Any*): NDArrayFuncReturn

    Applies an activation function element-wise to the input.

    The following activation functions are supported:

    - relu: Rectified Linear Unit, :math:`y = max(x, 0)`
    - sigmoid: :math:`y = \frac{1}{1 + exp(-x)}`
    - tanh: Hyperbolic tangent, :math:`y = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}`
    - softrelu: Soft ReLU, or SoftPlus, :math:`y = log(1 + exp(x))`
    - softsign: :math:`y = \frac{x}{1 + abs(x)}`


    Defined in src/operator/nn/activation.cc:L176

    returns

    org.apache.mxnet.NDArray

  2. abstract def Activation(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies an activation function element-wise to the input.

    The following activation functions are supported:

    - relu: Rectified Linear Unit, :math:`y = max(x, 0)`
    - sigmoid: :math:`y = \frac{1}{1 + exp(-x)}`
    - tanh: Hyperbolic tangent, :math:`y = \frac{exp(x) - exp(-x)}{exp(x) + exp(-x)}`
    - softrelu: Soft ReLU, or SoftPlus, :math:`y = log(1 + exp(x))`
    - softsign: :math:`y = \frac{x}{1 + abs(x)}`


    Defined in src/operator/nn/activation.cc:L176

    returns

    org.apache.mxnet.NDArray
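
    A minimal Scala sketch of invoking this operator through the NDArray companion object,
    which implements NDArrayBase (a hedged example: it assumes the generated operators are
    available on NDArray and that NDArrayFuncReturn.head returns the first output NDArray)::

    import org.apache.mxnet._

    val x = NDArray.array(Array(-2f, 0f, 3f), shape = Shape(3))
    // kwargs variant: operator parameters go in the Map, tensors are positional args
    val y = NDArray.Activation(Map("act_type" -> "relu"))(x).head
    println(y.toArray.mkString(", "))  // 0.0, 0.0, 3.0 for relu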

  3. abstract def BatchNorm(args: Any*): NDArrayFuncReturn

    Batch normalization.

    Normalizes a data batch by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:

    .. math::

    data\_mean[i] = mean(data[:,i,:,...]) \\
    data\_var[i] = var(data[:,i,:,...])

    Then compute the normalized output, which has the same shape as input, as follows:

    .. math::

    out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

    Both *mean* and *var* return a scalar by treating the input as a vector.

    Assume the input has size *k* on axis 1; then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then the operator also outputs data_mean and
    the inverse of data_var, which are needed for the backward pass. Note that the gradients of these
    two outputs are blocked.

    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, moving_mean and moving_var, which are *k*-length
    vectors. They are global statistics for the whole dataset, and are updated
    by::

    moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
    moving_var = moving_var * momentum + data_var * (1 - momentum)

    If use_global_stats is set to true, then moving_mean and
    moving_var are used instead of data_mean and data_var to compute
    the output. This is often used during inference.

    The parameter axis specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups). The default is 1. Specifying -1 sets the channel
    axis to be the last item in the input shape.

    Both gamma and beta are learnable parameters. But if fix_gamma is true,
    then gamma is set to 1 and its gradient to 0.

    .. note::

    When fix_gamma is set to True, no sparse support is provided. If fix_gamma is set to False,
    sparse tensors will fall back to dense storage.



    Defined in src/operator/nn/batch_norm.cc:L571

    returns

    org.apache.mxnet.NDArray

  4. abstract def BatchNorm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Batch normalization.

    Normalizes a data batch by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:

    .. math::

    data\_mean[i] = mean(data[:,i,:,...]) \\
    data\_var[i] = var(data[:,i,:,...])

    Then compute the normalized output, which has the same shape as input, as follows:

    .. math::

    out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

    Both *mean* and *var* return a scalar by treating the input as a vector.

    Assume the input has size *k* on axis 1; then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then the operator also outputs data_mean and
    the inverse of data_var, which are needed for the backward pass. Note that the gradients of these
    two outputs are blocked.

    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, moving_mean and moving_var, which are *k*-length
    vectors. They are global statistics for the whole dataset, and are updated
    by::

    moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
    moving_var = moving_var * momentum + data_var * (1 - momentum)

    If use_global_stats is set to true, then moving_mean and
    moving_var are used instead of data_mean and data_var to compute
    the output. This is often used during inference.

    The parameter axis specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups). The default is 1. Specifying -1 sets the channel
    axis to be the last item in the input shape.

    Both gamma and beta are learnable parameters. But if fix_gamma is true,
    then gamma is set to 1 and its gradient to 0.

    .. note::

    When fix_gamma is set to True, no sparse support is provided. If fix_gamma is set to False,
    sparse tensors will fall back to dense storage.



    Defined in src/operator/nn/batch_norm.cc:L571

    returns

    org.apache.mxnet.NDArray
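
    A hedged Scala sketch of the five-input call (data, gamma, beta, moving_mean, moving_var),
    assuming the generated BatchNorm is available on the NDArray companion object and that
    scalar parameters may be passed through the kwargs Map::

    import org.apache.mxnet._

    // Toy batch of shape (2, 2): two examples, two channels along axis 1
    val data  = NDArray.array(Array(1f, 10f, 3f, 30f), shape = Shape(2, 2))
    val gamma = NDArray.ones(Shape(2))    // per-channel scale
    val beta  = NDArray.zeros(Shape(2))   // per-channel offset
    val movingMean = NDArray.zeros(Shape(2))
    val movingVar  = NDArray.ones(Shape(2))

    val out = NDArray.BatchNorm(Map("fix_gamma" -> false))(
      data, gamma, beta, movingMean, movingVar).head
    println(out.shape)  // (2,2), same shape as data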

  5. abstract def BatchNorm_v1(args: Any*): NDArrayFuncReturn

    Batch normalization.

    This operator is DEPRECATED. Perform BatchNorm on the input.

    Normalizes a data batch by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:

    .. math::

    data\_mean[i] = mean(data[:,i,:,...]) \\
    data\_var[i] = var(data[:,i,:,...])

    Then compute the normalized output, which has the same shape as input, as follows:

    .. math::

    out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

    Both *mean* and *var* return a scalar by treating the input as a vector.

    Assume the input has size *k* on axis 1; then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then the operator also outputs data_mean and
    data_var, which are needed for the backward pass.

    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, moving_mean and moving_var, which are *k*-length
    vectors. They are global statistics for the whole dataset, and are updated
    by::

    moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
    moving_var = moving_var * momentum + data_var * (1 - momentum)

    If use_global_stats is set to true, then moving_mean and
    moving_var are used instead of data_mean and data_var to compute
    the output. This is often used during inference.

    Both gamma and beta are learnable parameters. But if fix_gamma is true,
    then gamma is set to 1 and its gradient to 0.

    There's no sparse support for this operator, and it will exhibit problematic behavior if used with
    sparse tensors.



    Defined in src/operator/batch_norm_v1.cc:L95

    returns

    org.apache.mxnet.NDArray

  6. abstract def BatchNorm_v1(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Batch normalization.

    This operator is DEPRECATED. Perform BatchNorm on the input.

    Normalizes a data batch by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis:

    .. math::

    data\_mean[i] = mean(data[:,i,:,...]) \\
    data\_var[i] = var(data[:,i,:,...])

    Then compute the normalized output, which has the same shape as input, as follows:

    .. math::

    out[:,i,:,...] = \frac{data[:,i,:,...] - data\_mean[i]}{\sqrt{data\_var[i]+\epsilon}} * gamma[i] + beta[i]

    Both *mean* and *var* return a scalar by treating the input as a vector.

    Assume the input has size *k* on axis 1; then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then the operator also outputs data_mean and
    data_var, which are needed for the backward pass.

    Besides the inputs and the outputs, this operator accepts two auxiliary
    states, moving_mean and moving_var, which are *k*-length
    vectors. They are global statistics for the whole dataset, and are updated
    by::

    moving_mean = moving_mean * momentum + data_mean * (1 - momentum)
    moving_var = moving_var * momentum + data_var * (1 - momentum)

    If use_global_stats is set to true, then moving_mean and
    moving_var are used instead of data_mean and data_var to compute
    the output. This is often used during inference.

    Both gamma and beta are learnable parameters. But if fix_gamma is true,
    then gamma is set to 1 and its gradient to 0.

    There's no sparse support for this operator, and it will exhibit problematic behavior if used with
    sparse tensors.



    Defined in src/operator/batch_norm_v1.cc:L95

    returns

    org.apache.mxnet.NDArray

  7. abstract def BilinearSampler(args: Any*): NDArrayFuncReturn

    Applies bilinear sampling to the input feature map.

    Bilinear sampling is the key of [NIPS2015] "Spatial Transformer Networks". The usage of the operator is very similar to the remap function in OpenCV,
    except that the operator has the backward pass.

    Given :math:`data` and :math:`grid`, the output is computed by

    .. math::
    x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\
    y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\
    output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])

    :math:`x_{dst}`, :math:`y_{dst}` enumerate all spatial locations in :math:`output`, and :math:`G()` denotes the bilinear interpolation kernel.
    The out-of-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).

    The operator assumes that :math:`data` has 'NCHW' layout and :math:`grid` has been normalized to [-1, 1].

    BilinearSampler often cooperates with GridGenerator, which generates sampling grids for BilinearSampler.
    GridGenerator supports two kinds of transformation: affine and warp.
    If users want to design a CustomOp to manipulate :math:`grid`, please first refer to the code of GridGenerator.

    Example 1::

    ## Zoom out data two times
    data = array([[[[1, 4, 3, 6],
                    [1, 8, 8, 9],
                    [0, 4, 1, 5],
                    [1, 0, 1, 3]]]])

    affine_matrix = array([[2, 0, 0],
                           [0, 2, 0]])

    affine_matrix = reshape(affine_matrix, shape=(1, 6))

    grid = GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))

    out = BilinearSampler(data, grid)

    out
    [[[[ 0,    0,    0,   0],
       [ 0,    3.5,  6.5, 0],
       [ 0,    1.25, 2.5, 0],
       [ 0,    0,    0,   0]]]]


    Example 2::

    ## shift data horizontally by -1 pixel

    data = array([[[[1, 4, 3, 6],
                    [1, 8, 8, 9],
                    [0, 4, 1, 5],
                    [1, 0, 1, 3]]]])

    warp_matrix = array([[[[1, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1]],
                          [[0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0]]]])

    grid = GridGenerator(data=warp_matrix, transform_type='warp')
    out = BilinearSampler(data, grid)

    out
    [[[[ 4, 3, 6, 0],
       [ 8, 8, 9, 0],
       [ 4, 1, 5, 0],
       [ 0, 1, 3, 0]]]]


    Defined in src/operator/bilinear_sampler.cc:L245

    returns

    org.apache.mxnet.NDArray

  8. abstract def BilinearSampler(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies bilinear sampling to the input feature map.

    Bilinear sampling is the key of [NIPS2015] "Spatial Transformer Networks". The usage of the operator is very similar to the remap function in OpenCV,
    except that the operator has the backward pass.

    Given :math:`data` and :math:`grid`, the output is computed by

    .. math::
    x_{src} = grid[batch, 0, y_{dst}, x_{dst}] \\
    y_{src} = grid[batch, 1, y_{dst}, x_{dst}] \\
    output[batch, channel, y_{dst}, x_{dst}] = G(data[batch, channel, y_{src}, x_{src}])

    :math:`x_{dst}`, :math:`y_{dst}` enumerate all spatial locations in :math:`output`, and :math:`G()` denotes the bilinear interpolation kernel.
    The out-of-boundary points will be padded with zeros. The shape of the output will be (data.shape[0], data.shape[1], grid.shape[2], grid.shape[3]).

    The operator assumes that :math:`data` has 'NCHW' layout and :math:`grid` has been normalized to [-1, 1].

    BilinearSampler often cooperates with GridGenerator, which generates sampling grids for BilinearSampler.
    GridGenerator supports two kinds of transformation: affine and warp.
    If users want to design a CustomOp to manipulate :math:`grid`, please first refer to the code of GridGenerator.

    Example 1::

    ## Zoom out data two times
    data = array([[[[1, 4, 3, 6],
                    [1, 8, 8, 9],
                    [0, 4, 1, 5],
                    [1, 0, 1, 3]]]])

    affine_matrix = array([[2, 0, 0],
                           [0, 2, 0]])

    affine_matrix = reshape(affine_matrix, shape=(1, 6))

    grid = GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))

    out = BilinearSampler(data, grid)

    out
    [[[[ 0,    0,    0,   0],
       [ 0,    3.5,  6.5, 0],
       [ 0,    1.25, 2.5, 0],
       [ 0,    0,    0,   0]]]]


    Example 2::

    ## shift data horizontally by -1 pixel

    data = array([[[[1, 4, 3, 6],
                    [1, 8, 8, 9],
                    [0, 4, 1, 5],
                    [1, 0, 1, 3]]]])

    warp_matrix = array([[[[1, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1],
                           [1, 1, 1, 1]],
                          [[0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0]]]])

    grid = GridGenerator(data=warp_matrix, transform_type='warp')
    out = BilinearSampler(data, grid)

    out
    [[[[ 4, 3, 6, 0],
       [ 8, 8, 9, 0],
       [ 4, 1, 5, 0],
       [ 0, 1, 3, 0]]]]


    Defined in src/operator/bilinear_sampler.cc:L245

    returns

    org.apache.mxnet.NDArray
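
    The Scala equivalent of Example 1 above might look like the following sketch. It is
    hedged: it assumes the generated operators on the NDArray object, and that the
    target_shape tuple can be passed in its string form::

    import org.apache.mxnet._

    // 1x1x4x4 input feature map, same values as Example 1
    val data = NDArray.array(
      Array(1f, 4f, 3f, 6f, 1f, 8f, 8f, 9f, 0f, 4f, 1f, 5f, 1f, 0f, 1f, 3f),
      shape = Shape(1, 1, 4, 4))

    // Affine matrix [[2, 0, 0], [0, 2, 0]] flattened to shape (1, 6): zoom out 2x
    val affine = NDArray.array(Array(2f, 0f, 0f, 0f, 2f, 0f), shape = Shape(1, 6))

    val grid = NDArray.GridGenerator(Map(
      "transform_type" -> "affine", "target_shape" -> "(4, 4)"))(affine).head
    val out = NDArray.BilinearSampler(data, grid).head
    println(out.shape)  // (1,1,4,4)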

  9. abstract def BlockGrad(args: Any*): NDArrayFuncReturn

    Stops gradient computation.

    Stops the accumulated gradient of the inputs from flowing through this operator
    in the backward direction. In other words, this operator prevents the contribution
    of its inputs from being taken into account for computing gradients.

    Example::

    v1 = [1, 2]
    v2 = [0, 1]
    a = Variable('a')
    b = Variable('b')
    b_stop_grad = stop_gradient(3 * b)
    loss = MakeLoss(b_stop_grad + a)

    executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
    executor.forward(is_train=True, a=v1, b=v2)
    executor.outputs
    [ 1. 5.]

    executor.backward()
    executor.grad_arrays
    [ 0. 0.]
    [ 1. 1.]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L265

    returns

    org.apache.mxnet.NDArray

  10. abstract def BlockGrad(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Stops gradient computation.

    Stops the accumulated gradient of the inputs from flowing through this operator
    in the backward direction. In other words, this operator prevents the contribution
    of its inputs from being taken into account for computing gradients.

    Example::

    v1 = [1, 2]
    v2 = [0, 1]
    a = Variable('a')
    b = Variable('b')
    b_stop_grad = stop_gradient(3 * b)
    loss = MakeLoss(b_stop_grad + a)

    executor = loss.simple_bind(ctx=cpu(), a=(1,2), b=(1,2))
    executor.forward(is_train=True, a=v1, b=v2)
    executor.outputs
    [ 1. 5.]

    executor.backward()
    executor.grad_arrays
    [ 0. 0.]
    [ 1. 1.]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L265

    returns

    org.apache.mxnet.NDArray
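
    In the imperative Scala API the forward pass of BlockGrad is an identity function;
    a minimal sketch (assuming the generated operators on the NDArray object)::

    import org.apache.mxnet._

    val b = NDArray.array(Array(0f, 1f), shape = Shape(2))
    // Values pass through unchanged; gradients would be blocked in the backward pass
    val stopped = NDArray.BlockGrad(b * 3f).head
    println(stopped.toArray.mkString(", "))  // 0.0, 3.0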

  11. abstract def Cast(args: Any*): NDArrayFuncReturn

    Casts all elements of the input to a new type.

    .. note:: Cast is deprecated. Use cast instead.

    Example::

    cast([0.9, 1.3], dtype='int32') = [0, 1]
    cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
    cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L504

    returns

    org.apache.mxnet.NDArray

  12. abstract def Cast(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Casts all elements of the input to a new type.

    .. note:: Cast is deprecated. Use cast instead.

    Example::

    cast([0.9, 1.3], dtype='int32') = [0, 1]
    cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
    cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L504

    returns

    org.apache.mxnet.NDArray
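
    A minimal Scala sketch mirroring the first example above (assuming the generated
    operators on the NDArray object)::

    import org.apache.mxnet._

    val x = NDArray.array(Array(0.9f, 1.3f), shape = Shape(2))
    // Cast to int32; fractional parts are truncated as in the example
    val xi = NDArray.Cast(Map("dtype" -> "int32"))(x).head
    println(xi.dtype)  // Int32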

  13. abstract def Concat(args: Any*): NDArrayFuncReturn

    Joins input arrays along a given axis.

    .. note:: Concat is deprecated. Use concat instead.

    The dimensions of the input arrays should be the same except the axis along
    which they will be concatenated.
    The dimension of the output array along the concatenated axis will be equal
    to the sum of the corresponding dimensions of the input arrays.

    The storage type of concat output depends on storage types of inputs

    - concat(csr, csr, ..., csr, dim=0) = csr
    - otherwise, concat generates output with default storage

    Example::

    x = [[1,1],[2,2]]
    y = [[3,3],[4,4],[5,5]]
    z = [[6,6],[7,7],[8,8]]

    concat(x,y,z,dim=0) = [[ 1., 1.],
                           [ 2., 2.],
                           [ 3., 3.],
                           [ 4., 4.],
                           [ 5., 5.],
                           [ 6., 6.],
                           [ 7., 7.],
                           [ 8., 8.]]

    Note that you cannot concat x,y,z along dimension 1 since dimension
    0 is not the same for all the input arrays.

    concat(y,z,dim=1) = [[ 3., 3., 6., 6.],
                         [ 4., 4., 7., 7.],
                         [ 5., 5., 8., 8.]]




    Defined in src/operator/nn/concat.cc:L270

    returns

    org.apache.mxnet.NDArray

  14. abstract def Concat(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Joins input arrays along a given axis.

    .. note:: Concat is deprecated. Use concat instead.

    The dimensions of the input arrays should be the same except the axis along
    which they will be concatenated.
    The dimension of the output array along the concatenated axis will be equal
    to the sum of the corresponding dimensions of the input arrays.

    The storage type of concat output depends on storage types of inputs

    - concat(csr, csr, ..., csr, dim=0) = csr
    - otherwise, concat generates output with default storage

    Example::

    x = [[1,1],[2,2]]
    y = [[3,3],[4,4],[5,5]]
    z = [[6,6],[7,7],[8,8]]

    concat(x,y,z,dim=0) = [[ 1., 1.],
                           [ 2., 2.],
                           [ 3., 3.],
                           [ 4., 4.],
                           [ 5., 5.],
                           [ 6., 6.],
                           [ 7., 7.],
                           [ 8., 8.]]

    Note that you cannot concat x,y,z along dimension 1 since dimension
    0 is not the same for all the input arrays.

    concat(y,z,dim=1) = [[ 3., 3., 6., 6.],
                         [ 4., 4., 7., 7.],
                         [ 5., 5., 8., 8.]]




    Defined in src/operator/nn/concat.cc:L270

    returns

    org.apache.mxnet.NDArray
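
    A minimal Scala sketch of the dim=0 case above (assuming the generated operators on
    the NDArray object; num_args is passed explicitly here, though the backend can
    usually infer it from the number of inputs)::

    import org.apache.mxnet._

    val x = NDArray.array(Array(1f, 1f, 2f, 2f), shape = Shape(2, 2))
    val y = NDArray.array(Array(3f, 3f, 4f, 4f, 5f, 5f), shape = Shape(3, 2))

    val out = NDArray.Concat(Map("num_args" -> 2, "dim" -> 0))(x, y).head
    println(out.shape)  // (5,2)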

  15. abstract def Convolution(args: Any*): NDArrayFuncReturn

    Compute *N*-D convolution on *(N+2)*-D input.

    In the 2-D convolution, given input data with shape *(batch_size,
    channel, height, width)*, the output is computed by

    .. math::

    out[n,i,:,:] = bias[i] + \sum_{j=0}^{channel} data[n,j,:,:] \star
    weight[i,j,:,:]

    where :math:`\star` is the 2-D cross-correlation operator.

    For general 2-D convolution, the shapes are

    - **data**: *(batch_size, channel, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*.

    Define::

    f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1

    then we have::

    out_height=f(height, kernel[0], pad[0], stride[0], dilate[0])
    out_width=f(width, kernel[1], pad[1], stride[1], dilate[1])

    If no_bias is set to be true, then the bias term is ignored.

    The default data layout is *NCHW*, namely *(batch_size, channel, height,
    width)*. We can choose other layouts such as *NHWC*.

    If num_group is larger than 1, denoted by *g*, then split the input data
    evenly into *g* parts along the channel axis, and also evenly split weight
    along the first dimension. Next compute the convolution on the *i*-th part of
    the data with the *i*-th weight part. The output is obtained by concatenating all
    the *g* results.

    1-D convolution does not have *height* dimension but only *width* in space.

    - **data**: *(batch_size, channel, width)*
    - **weight**: *(num_filter, channel, kernel[0])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_width)*.

    3-D convolution adds an additional *depth* dimension besides *height* and
    *width*. The shapes are

    - **data**: *(batch_size, channel, depth, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1], kernel[2])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_depth, out_height, out_width)*.

    Both weight and bias are learnable parameters.

    There are other options to tune the performance.

    - **cudnn_tune**: enabling this option leads to higher startup time but may give
      faster speed. Options are

      • **off**: no tuning
      • **limited_workspace**: run test and pick the fastest algorithm that doesn't
        exceed workspace limit.
      • **fastest**: pick the fastest algorithm and ignore workspace limit.
      • **None** (default): the behavior is determined by the environment variable
        MXNET_CUDNN_AUTOTUNE_DEFAULT. 0 for off, 1 for limited workspace
        (default), 2 for fastest.

    - **workspace**: A large number leads to more (GPU) memory usage but may improve
      the performance.


    Defined in src/operator/nn/convolution.cc:L470
    returns

    org.apache.mxnet.NDArray

  16. abstract def Convolution(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Compute *N*-D convolution on *(N+2)*-D input.

    In the 2-D convolution, given input data with shape *(batch_size,
    channel, height, width)*, the output is computed by

    .. math::

    out[n,i,:,:] = bias[i] + \sum_{j=0}^{channel} data[n,j,:,:] \star
    weight[i,j,:,:]

    where :math:`\star` is the 2-D cross-correlation operator.

    For general 2-D convolution, the shapes are

    - **data**: *(batch_size, channel, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*.

    Define::

    f(x,k,p,s,d) = floor((x+2*p-d*(k-1)-1)/s)+1

    then we have::

    out_height=f(height, kernel[0], pad[0], stride[0], dilate[0])
    out_width=f(width, kernel[1], pad[1], stride[1], dilate[1])

    If no_bias is set to be true, then the bias term is ignored.

    The default data layout is *NCHW*, namely *(batch_size, channel, height,
    width)*. We can choose other layouts such as *NHWC*.

    If num_group is larger than 1, denoted by *g*, then split the input data
    evenly into *g* parts along the channel axis, and also evenly split weight
    along the first dimension. Next compute the convolution on the *i*-th part of
    the data with the *i*-th weight part. The output is obtained by concatenating all
    the *g* results.

    1-D convolution does not have *height* dimension but only *width* in space.

    - **data**: *(batch_size, channel, width)*
    - **weight**: *(num_filter, channel, kernel[0])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_width)*.

    3-D convolution adds an additional *depth* dimension besides *height* and
    *width*. The shapes are

    - **data**: *(batch_size, channel, depth, height, width)*
    - **weight**: *(num_filter, channel, kernel[0], kernel[1], kernel[2])*
    - **bias**: *(num_filter,)*
    - **out**: *(batch_size, num_filter, out_depth, out_height, out_width)*.

    Both weight and bias are learnable parameters.

    There are other options to tune the performance.

    - **cudnn_tune**: enabling this option leads to higher startup time but may give
      faster speed. Options are

      • **off**: no tuning
      • **limited_workspace**: run test and pick the fastest algorithm that doesn't
        exceed workspace limit.
      • **fastest**: pick the fastest algorithm and ignore workspace limit.
      • **None** (default): the behavior is determined by the environment variable
        MXNET_CUDNN_AUTOTUNE_DEFAULT. 0 for off, 1 for limited workspace
        (default), 2 for fastest.

    - **workspace**: A large number leads to more (GPU) memory usage but may improve
      the performance.


    Defined in src/operator/nn/convolution.cc:L470
    returns

    org.apache.mxnet.NDArray
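
    To make the output-shape rule concrete, the hypothetical Scala helper below (not part
    of the API) evaluates the f from the Define:: block above for one spatial axis::

    // f(x,k,p,s,d) = floor((x + 2*p - d*(k-1) - 1)/s) + 1
    def f(x: Int, k: Int, p: Int, s: Int, d: Int): Int =
      (x + 2 * p - d * (k - 1) - 1) / s + 1

    // A 3x3 kernel with pad 1, stride 1, dilate 1 preserves a 32x32 input:
    println(f(32, 3, 1, 1, 1))  // 32
    // The same kernel with stride 2 halves it (rounding down):
    println(f(32, 3, 1, 2, 1))  // 16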

  17. abstract def Convolution_v1(args: Any*): NDArrayFuncReturn

    This operator is DEPRECATED. Apply convolution to input then add a bias.

    returns

    org.apache.mxnet.NDArray

  18. abstract def Convolution_v1(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    This operator is DEPRECATED. Apply convolution to input then add a bias.

    returns

    org.apache.mxnet.NDArray

  19. abstract def Correlation(args: Any*): NDArrayFuncReturn

    Applies correlation to inputs.

    The correlation layer performs multiplicative patch comparisons between two feature maps.

    Given two multi-channel feature maps :math:`f_{1}`, :math:`f_{2}`, with :math:`w`, :math:`h`, and :math:`c` being their width, height, and number of channels,
    the correlation layer lets the network compare each patch from :math:`f_{1}` with each patch from :math:`f_{2}`.

    For now we consider only a single comparison of two patches. The 'correlation' of two patches centered at :math:`x_{1}` in the first map and
    :math:`x_{2}` in the second map is then defined as:

    .. math::

    c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>

    for a square patch of size :math:`K:=2k+1`.

    Note that the equation above is identical to one step of a convolution in neural networks, but instead of convolving data with a filter, it convolves data with other
    data. For this reason, it has no training weights.

    Computing :math:`c(x_{1}, x_{2})` involves :math:`c * K^{2}` multiplications. Comparing all patch combinations involves :math:`w^{2}*h^{2}` such computations.

    Given a maximum displacement :math:`d`, for each location :math:`x_{1}` it computes correlations :math:`c(x_{1}, x_{2})` only in a neighborhood of size :math:`D:=2d+1`,
    by limiting the range of :math:`x_{2}`. We use strides :math:`s_{1}`, :math:`s_{2}`, to quantize :math:`x_{1}` globally and to quantize :math:`x_{2}` within the neighborhood
    centered around :math:`x_{1}`.

    The final output is defined by the following expression:

    .. math::

    out[n, q, i, j] = c(x_{i, j}, x_{q})

    where :math:`i` and :math:`j` enumerate spatial locations in :math:`f_{1}`, and :math:`q` denotes the :math:`q^{th}` neighborhood of :math:`x_{i,j}`.


    Defined in src/operator/correlation.cc:L198

    returns

    org.apache.mxnet.NDArray

  20. abstract def Correlation(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies correlation to inputs.

    The correlation layer performs multiplicative patch comparisons between two feature maps.

    Given two multi-channel feature maps :math:`f_{1}`, :math:`f_{2}`, with :math:`w`, :math:`h`, and :math:`c` being their width, height, and number of channels,
    the correlation layer lets the network compare each patch from :math:`f_{1}` with each patch from :math:`f_{2}`.

    For now we consider only a single comparison of two patches. The 'correlation' of two patches centered at :math:`x_{1}` in the first map and
    :math:`x_{2}` in the second map is then defined as:

    .. math::

    c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>

    for a square patch of size :math:`K:=2k+1`.

    Note that the equation above is identical to one step of a convolution in neural networks, but instead of convolving data with a filter, it convolves data with other
    data. For this reason, it has no training weights.

    Computing :math:`c(x_{1}, x_{2})` involves :math:`c * K^{2}` multiplications. Comparing all patch combinations involves :math:`w^{2}*h^{2}` such computations.

    Given a maximum displacement :math:`d`, for each location :math:`x_{1}` it computes correlations :math:`c(x_{1}, x_{2})` only in a neighborhood of size :math:`D:=2d+1`,
    by limiting the range of :math:`x_{2}`. We use strides :math:`s_{1}`, :math:`s_{2}`, to quantize :math:`x_{1}` globally and to quantize :math:`x_{2}` within the neighborhood
    centered around :math:`x_{1}`.

    The final output is defined by the following expression:

    .. math::

    out[n, q, i, j] = c(x_{i, j}, x_{q})

    where :math:`i` and :math:`j` enumerate spatial locations in :math:`f_{1}`, and :math:`q` denotes the :math:`q^{th}` neighborhood of :math:`x_{i,j}`.


    Defined in src/operator/correlation.cc:L198

    returns

    org.apache.mxnet.NDArray

  21. abstract def Crop(args: Any*): NDArrayFuncReturn



    .. note:: Crop is deprecated. Use slice instead.

    Crop the 2nd and 3rd dimensions of the input data, with the crop size given either by h_w or
    by the width and height of the second input symbol. That is, with one input, we need h_w to
    specify the crop height and width; otherwise the second input symbol's size will be used.


    Defined in src/operator/crop.cc:L50

    returns

    org.apache.mxnet.NDArray

  22. abstract def Crop(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn



    .. note:: Crop is deprecated. Use slice instead.

    Crop the 2nd and 3rd dimensions of the input data, with the crop size given either by h_w or
    by the width and height of the second input symbol. That is, with one input, we need h_w to
    specify the crop height and width; otherwise the second input symbol's size will be used.


    Defined in src/operator/crop.cc:L50

    returns

    org.apache.mxnet.NDArray

  23. abstract def Custom(args: Any*): NDArrayFuncReturn

    Apply a custom operator implemented in a frontend language (like Python).

    Custom operators should override required methods like forward and backward.
    The custom operator must be registered before it can be used.
    Please check the tutorial here: https://mxnet.incubator.apache.org/faq/new_op.html.



    Defined in src/operator/custom/custom.cc:L547


    returns

    org.apache.mxnet.NDArray

  24. abstract def Custom(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Apply a custom operator implemented in a frontend language (like Python).

    Custom operators should override required methods like forward and backward.
    The custom operator must be registered before it can be used.
    Please check the tutorial here: https://mxnet.incubator.apache.org/faq/new_op.html.



    Defined in src/operator/custom/custom.cc:L547


    returns

    org.apache.mxnet.NDArray

  25. abstract def Deconvolution(args: Any*): NDArrayFuncReturn

    Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor. This operation can be seen as the gradient of the Convolution operation with respect to its input. Convolution usually reduces the size of the input; transposed convolution works the other way, going from a smaller input to a larger output while preserving the connectivity pattern.

    returns

    org.apache.mxnet.NDArray

  26. abstract def Deconvolution(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes 1D or 2D transposed convolution (aka fractionally strided convolution) of the input tensor. This operation can be seen as the gradient of the Convolution operation with respect to its input. Convolution usually reduces the size of the input; transposed convolution works the other way, going from a smaller input to a larger output while preserving the connectivity pattern.

    returns

    org.apache.mxnet.NDArray

  27. abstract def Dropout(args: Any*): NDArrayFuncReturn

    Applies dropout operation to input array.

    - During training, each element of the input is set to zero with probability p.
      The whole array is rescaled by :math:`1/(1-p)` to keep the expected
      sum of the input unchanged.

    - During testing, this operator does not change the input if mode is 'training'.
      If mode is 'always', the same computation as during training will be applied.

    Example::

    random.seed(998)
    input_array = array([[3., 0.5, -0.5, 2., 7.],
                         [2., -0.4, 7., 3., 0.2]])
    a = symbol.Variable('a')
    dropout = symbol.Dropout(a, p = 0.2)
    executor = dropout.simple_bind(a = input_array.shape)

    ## If training
    executor.forward(is_train = True, a = input_array)
    executor.outputs
    [[ 3.75  0.625 -0.    2.5   8.75 ]
     [ 2.5  -0.5   8.75   3.75  0.   ]]

    ## If testing
    executor.forward(is_train = False, a = input_array)
    executor.outputs
    [[ 3.   0.5  -0.5   2.   7.  ]
     [ 2.  -0.4   7.    3.   0.2 ]]


    Defined in src/operator/nn/dropout.cc:L76

    returns

    org.apache.mxnet.NDArray

  28. abstract def Dropout(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies dropout operation to input array.

    - During training, each element of the input is set to zero with probability p.
      The whole array is rescaled by :math:`1/(1-p)` to keep the expected
      sum of the input unchanged.

    - During testing, this operator does not change the input if mode is 'training'.
      If mode is 'always', the same computation as during training will be applied.

    Example::

    random.seed(998)
    input_array = array([[3., 0.5, -0.5, 2., 7.],
                         [2., -0.4, 7., 3., 0.2]])
    a = symbol.Variable('a')
    dropout = symbol.Dropout(a, p = 0.2)
    executor = dropout.simple_bind(a = input_array.shape)

    ## If training
    executor.forward(is_train = True, a = input_array)
    executor.outputs
    [[ 3.75  0.625 -0.    2.5   8.75 ]
     [ 2.5  -0.5   8.75   3.75  0.   ]]

    ## If testing
    executor.forward(is_train = False, a = input_array)
    executor.outputs
    [[ 3.   0.5  -0.5   2.   7.  ]
     [ 2.  -0.4   7.    3.   0.2 ]]


    Defined in src/operator/nn/dropout.cc:L76

    returns

    org.apache.mxnet.NDArray
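
    A minimal Scala sketch (assuming the generated operators on the NDArray object;
    mode='always' forces the scaled dropout even outside of training)::

    import org.apache.mxnet._

    val x = NDArray.array(Array(3f, 0.5f, -0.5f, 2f, 7f), shape = Shape(1, 5))
    val out = NDArray.Dropout(Map("p" -> 0.2, "mode" -> "always"))(x).head
    // Surviving entries are scaled by 1/(1-p) = 1.25; dropped entries become 0
    println(out.toArray.mkString(", "))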

  29. abstract def ElementWiseSum(args: Any*): NDArrayFuncReturn

    Adds all input arguments element-wise.

    .. math::
    add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n

    add_n is potentially more efficient than calling add n-1 times.

    The storage type of add_n output depends on storage types of inputs

    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
    - add_n(any input combinations longer than 4 (>4) with at least one default type) = default
    - otherwise, add_n falls all inputs back to default storage and generates default storage



    Defined in src/operator/tensor/elemwise_sum.cc:L156

    returns

    org.apache.mxnet.NDArray

  30. abstract def ElementWiseSum(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Adds all input arguments element-wise.

    .. math::
    add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n

    add_n is potentially more efficient than calling add n-1 times.

    The storage type of add_n output depends on storage types of inputs

    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
    - add_n(any input combinations longer than 4 (>4) with at least one default type) = default
    - otherwise, add_n falls all inputs back to default storage and generates default storage



    Defined in src/operator/tensor/elemwise_sum.cc:L156

    returns

    org.apache.mxnet.NDArray
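
    A minimal Scala sketch (assuming the generated operators on the NDArray object;
    num_args is passed explicitly here, though it can usually be inferred)::

    import org.apache.mxnet._

    val a = NDArray.array(Array(1f, 2f), shape = Shape(2))
    val b = NDArray.array(Array(3f, 4f), shape = Shape(2))
    val c = NDArray.array(Array(5f, 6f), shape = Shape(2))

    // One fused kernel instead of two chained adds
    val s = NDArray.ElementWiseSum(Map("num_args" -> 3))(a, b, c).head
    println(s.toArray.mkString(", "))  // 9.0, 12.0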

  31. abstract def Embedding(args: Any*): NDArrayFuncReturn

    Maps integer indices to vector representations (embeddings).

    This operator maps words to real-valued vectors in a high-dimensional space,
    called word embeddings. These embeddings can capture semantic and syntactic properties of the words.
    For example, it has been noted that in the learned embedding spaces, similar words tend
    to be close to each other and dissimilar words far apart.

    For an input array of shape (d1, ..., dK),
    the shape of an output array is (d1, ..., dK, output_dim).
    All the input values should be integers in the range [0, input_dim).

    If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be
    (ip0, op0).

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last vector in an embedding matrix.

    Examples::

    input_dim = 4
    output_dim = 5

    // Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
    y = [[  0.,  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.,  9.],
         [ 10., 11., 12., 13., 14.],
         [ 15., 16., 17., 18., 19.]]

    // Input array x represents n-grams (2-gram). So, x = [(w1,w3), (w0,w2)]
    x = [[ 1., 3.],
         [ 0., 2.]]

    // Map input x to its vector representation y.
    Embedding(x, y, 4, 5) = [[[  5.,  6.,  7.,  8.,  9.],
                              [ 15., 16., 17., 18., 19.]],

                             [[  0.,  1.,  2.,  3.,  4.],
                              [ 10., 11., 12., 13., 14.]]]


    The storage type of weight can be either row_sparse or default.

    .. Note::

    If "sparse_grad" is set to True, the storage type of gradient w.r.t weights will be
    "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
    and Adam. Note that by default lazy updates is turned on, which may perform differently
    from standard updates. For more details, please check the Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html



    Defined in src/operator/tensor/indexing_op.cc:L239

    returns

    org.apache.mxnet.NDArray

  32. abstract def Embedding(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Maps integer indices to vector representations (embeddings).

    This operator maps words to real-valued vectors in a high-dimensional space,
    called word embeddings. These embeddings can capture semantic and syntactic properties of the words.
    For example, it has been noted that in the learned embedding spaces, similar words tend
    to be close to each other and dissimilar words far apart.

    For an input array of shape (d1, ..., dK),
    the shape of an output array is (d1, ..., dK, output_dim).
    All the input values should be integers in the range [0, input_dim).

    If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be
    (ip0, op0).

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last vector in an embedding matrix.

    Examples::

    input_dim = 4
    output_dim = 5

    // Each row in weight matrix y represents a word. So, y = (w0,w1,w2,w3)
    y = [[  0.,  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.,  9.],
         [ 10., 11., 12., 13., 14.],
         [ 15., 16., 17., 18., 19.]]

    // Input array x represents n-grams (2-gram). So, x = [(w1,w3), (w0,w2)]
    x = [[ 1., 3.],
         [ 0., 2.]]

    // Map input x to its vector representation y.
    Embedding(x, y, 4, 5) = [[[  5.,  6.,  7.,  8.,  9.],
                              [ 15., 16., 17., 18., 19.]],

                             [[  0.,  1.,  2.,  3.,  4.],
                              [ 10., 11., 12., 13., 14.]]]


    The storage type of weight can be either row_sparse or default.

    .. Note::

    If "sparse_grad" is set to True, the storage type of gradient w.r.t weights will be
    "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
    and Adam. Note that by default lazy updates is turned on, which may perform differently
    from standard updates. For more details, please check the Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html



    Defined in src/operator/tensor/indexing_op.cc:L239

    returns

    org.apache.mxnet.NDArray
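
    A minimal Scala sketch mirroring the example above (assuming the generated operators
    on the NDArray object)::

    import org.apache.mxnet._

    // Weight matrix: input_dim = 4 words, output_dim = 5 dimensions
    val weight  = NDArray.array((0 until 20).map(_.toFloat).toArray, shape = Shape(4, 5))
    val indices = NDArray.array(Array(1f, 3f, 0f, 2f), shape = Shape(2, 2))

    val vecs = NDArray.Embedding(Map("input_dim" -> 4, "output_dim" -> 5))(indices, weight).head
    println(vecs.shape)  // (2,2,5)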

  33. abstract def Flatten(args: Any*): NDArrayFuncReturn

    Flattens the input array into a 2-D array by collapsing the higher dimensions.

    .. note:: Flatten is deprecated. Use flatten instead.

    For an input array with shape (d1, d2, ..., dk), flatten operation reshapes
    the input array into an output array of shape (d1, d2*...*dk).

    Note that the behavior of this function is different from numpy.ndarray.flatten,
    which behaves similar to mxnet.ndarray.reshape((-1,)).

    Example::

    x = [[[1,2,3],
          [4,5,6],
          [7,8,9]],
         [[1,2,3],
          [4,5,6],
          [7,8,9]]]

    flatten(x) = [[ 1., 2., 3., 4., 5., 6., 7., 8., 9.],
                  [ 1., 2., 3., 4., 5., 6., 7., 8., 9.]]




    Defined in src/operator/tensor/matrix_op.cc:L258

    returns

    org.apache.mxnet.NDArray

  34. abstract def Flatten(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Flattens the input array into a 2-D array by collapsing the higher dimensions.

    .. note:: Flatten is deprecated. Use flatten instead.

    For an input array with shape (d1, d2, ..., dk), flatten operation reshapes
    the input array into an output array of shape (d1, d2*...*dk).

    Note that the behavior of this function is different from numpy.ndarray.flatten,
    which behaves similar to mxnet.ndarray.reshape((-1,)).

    Example::

    x = [[[1,2,3],
          [4,5,6],
          [7,8,9]],
         [[1,2,3],
          [4,5,6],
          [7,8,9]]]

    flatten(x) = [[ 1., 2., 3., 4., 5., 6., 7., 8., 9.],
                  [ 1., 2., 3., 4., 5., 6., 7., 8., 9.]]




    Defined in src/operator/tensor/matrix_op.cc:L258

    returns

    org.apache.mxnet.NDArray
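
    A minimal Scala sketch (assuming the generated operators on the NDArray object)::

    import org.apache.mxnet._

    val x = NDArray.ones(Shape(2, 3, 3))
    // Collapses everything after the first axis: (2,3,3) -> (2,9)
    val flat = NDArray.Flatten(x).head
    println(flat.shape)  // (2,9)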

  35. abstract def FullyConnected(args: Any*): NDArrayFuncReturn

    Applies a linear transformation: :math:`Y = XW^T + b`.

    If flatten is set to be true, then the shapes are:

    - **data**: (batch_size, x1, x2, ..., xn)
    - **weight**: (num_hidden, x1 * x2 * ... * xn)
    - **bias**: (num_hidden,)
    - **out**: (batch_size, num_hidden)

    If flatten is set to be false, then the shapes are:

    - **data**: (x1, x2, ..., xn, input_dim)
    - **weight**: (num_hidden, input_dim)
    - **bias**: (num_hidden,)
    - **out**: (x1, x2, ..., xn, num_hidden)

    The learnable parameters include both weight and bias.

    If no_bias is set to be true, then the bias term is ignored.

    Note that the operator also supports forward computation with row_sparse weight and bias,
    where the length of weight.indices and bias.indices must be equal to num_hidden.
    This could be used for model inference with row_sparse weights trained with SparseEmbedding.


    Defined in src/operator/nn/fully_connected.cc:L257

    returns

    org.apache.mxnet.NDArray

  36. abstract def FullyConnected(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies a linear transformation: :math:`Y = XW^T + b`.

    If flatten is set to be true, then the shapes are:

    - **data**: (batch_size, x1, x2, ..., xn)
    - **weight**: (num_hidden, x1 * x2 * ... * xn)
    - **bias**: (num_hidden,)
    - **out**: (batch_size, num_hidden)

    If flatten is set to be false, then the shapes are:

    - **data**: (x1, x2, ..., xn, input_dim)
    - **weight**: (num_hidden, input_dim)
    - **bias**: (num_hidden,)
    - **out**: (x1, x2, ..., xn, num_hidden)

    The learnable parameters include both weight and bias.

    If no_bias is set to be true, then the bias term is ignored.

    Note that the operator also supports forward computation with row_sparse weight and bias,
    where the length of weight.indices and bias.indices must be equal to num_hidden.
    This could be used for model inference with row_sparse weights trained with SparseEmbedding.


    Defined in src/operator/nn/fully_connected.cc:L257

    returns

    org.apache.mxnet.NDArray
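
    A minimal Scala sketch of the flatten=true case (assuming the generated operators on
    the NDArray object)::

    import org.apache.mxnet._

    val data   = NDArray.ones(Shape(4, 8))    // batch_size = 4, 8 input features
    val weight = NDArray.ones(Shape(16, 8))   // num_hidden = 16; note the W^T layout
    val bias   = NDArray.zeros(Shape(16))

    // Y = X W^T + b, so the output is (batch_size, num_hidden)
    val out = NDArray.FullyConnected(Map("num_hidden" -> 16))(data, weight, bias).head
    println(out.shape)  // (4,16)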

  37. abstract def GridGenerator(args: Any*): NDArrayFuncReturn

    Generates a 2D sampling grid for bilinear sampling.

    returns

    org.apache.mxnet.NDArray

  38. abstract def GridGenerator(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Generates a 2D sampling grid for bilinear sampling.

    returns

    org.apache.mxnet.NDArray

  39. abstract def IdentityAttachKLSparseReg(args: Any*): NDArrayFuncReturn

    Apply a sparse regularization to the output of a sigmoid activation function.

    returns

    org.apache.mxnet.NDArray

  40. abstract def IdentityAttachKLSparseReg(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Apply a sparse regularization to the output of a sigmoid activation function.

    returns

    org.apache.mxnet.NDArray

  41. abstract def InstanceNorm(args: Any*): NDArrayFuncReturn

    Applies instance normalization to the n-dimensional input array.

    This operator takes an n-dimensional input array where (n>2) and normalizes
    the input using the following formula:

    .. math::

    out = \frac{x - mean[data]}{ \sqrt{Var[data]} + \epsilon} * gamma + beta

    This layer is similar to batch normalization layer (BatchNorm)
    with two differences: first, the normalization is
    carried out per example (instance), not over a batch. Second, the
    same normalization is applied both at test and train time. This
    operation is also known as contrast normalization.

    If the input data is of shape [batch, channel, spatial_dim1, spatial_dim2, ...],
    the gamma and beta parameters must be vectors of shape [channel].

    This implementation is based on paper:

    .. [1] Instance Normalization: The Missing Ingredient for Fast Stylization,
    D. Ulyanov, A. Vedaldi, V. Lempitsky, 2016 (arXiv:1607.08022v2).

    Examples::

    // Input of shape (2,1,2)
    x = [[[ 1.1, 2.2]],
         [[ 3.3, 4.4]]]

    // gamma parameter of length 1
    gamma = [1.5]

    // beta parameter of length 1
    beta = [0.5]

    // Instance normalization is calculated with the above formula
    InstanceNorm(x, gamma, beta) = [[[-0.997527,   1.99752665]],
                                    [[-0.99752653, 1.99752724]]]



    Defined in src/operator/instance_norm.cc:L95

    returns

    org.apache.mxnet.NDArray

  42. abstract def InstanceNorm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies instance normalization to the n-dimensional input array.

    This operator takes an n-dimensional input array where (n>2) and normalizes
    the input using the following formula:

    .. math::

    out = \frac{x - mean[data]}{ \sqrt{Var[data]} + \epsilon} * gamma + beta

    This layer is similar to batch normalization layer (BatchNorm)
    with two differences: first, the normalization is
    carried out per example (instance), not over a batch. Second, the
    same normalization is applied both at test and train time. This
    operation is also known as contrast normalization.

    If the input data is of shape [batch, channel, spatial_dim1, spatial_dim2, ...],
    gamma and beta parameters must be vectors of shape [channel].

    This implementation is based on paper:

    .. [1] Instance Normalization: The Missing Ingredient for Fast Stylization,
    D. Ulyanov, A. Vedaldi, V. Lempitsky, 2016 (arXiv:1607.08022v2).

    Examples::

    // Input of shape (2,1,2)
    x = [[[ 1.1,  2.2]],
         [[ 3.3,  4.4]]]

    // gamma parameter of length 1
    gamma = [1.5]

    // beta parameter of length 1
    beta = [0.5]

    // Instance normalization is calculated with the above formula
    InstanceNorm(x, gamma, beta) = [[[-0.997527,    1.99752665]],
                                    [[-0.99752653,  1.99752724]]]



    Defined in src/operator/instance_norm.cc:L95

    returns

    org.apache.mxnet.NDArray

  43. abstract def L2Normalization(args: Any*): NDArrayFuncReturn

    Normalize the input array using the L2 norm.

    For 1-D NDArray, it computes::

    out = data / sqrt(sum(data ** 2) + eps)

    For N-D NDArray, if the input array has shape (N, N, ..., N),

    with mode = instance, it normalizes each instance in the multidimensional
    array by its L2 norm.::

    for i in 0...N
    out[i,:,:,...,:] = data[i,:,:,...,:] / sqrt(sum(data[i,:,:,...,:] ** 2) + eps)

    with mode = channel, it normalizes each channel in the array by its L2 norm.::

    for i in 0...N
    out[:,i,:,...,:] = data[:,i,:,...,:] / sqrt(sum(data[:,i,:,...,:] ** 2) + eps)

    with mode = spatial, it normalizes the cross channel norm for each position
    in the array by its L2 norm.::

    for dim in 2...N
    for i in 0...N
    out[.....,i,...] = take(out, indices=i, axis=dim) / sqrt(sum(take(out, indices=i, axis=dim) ** 2) + eps)
    -dim-

    Example::

    x = [[[1,2],
          [3,4]],
         [[2,2],
          [5,6]]]

    L2Normalization(x, mode='instance')
    = [[[ 0.18257418  0.36514837]
        [ 0.54772252  0.73029673]]
       [[ 0.24077171  0.24077171]
        [ 0.60192931  0.72231513]]]

    L2Normalization(x, mode='channel')
    = [[[ 0.31622776  0.44721359]
        [ 0.94868326  0.89442718]]
       [[ 0.37139067  0.31622776]
        [ 0.92847669  0.94868326]]]

    L2Normalization(x, mode='spatial')
    = [[[ 0.44721359  0.89442718]
        [ 0.60000002  0.80000001]]
       [[ 0.70710677  0.70710677]
        [ 0.6401844   0.76822126]]]



    Defined in src/operator/l2_normalization.cc:L98
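
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase and that mode is passed as a string kwarg::

    import org.apache.mxnet.{NDArray, Shape}

    // same (2,2,2) input as the example above
    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 2f, 2f, 5f, 6f), Shape(2, 2, 2))

    // mode selects which slices are normalized: "instance", "channel" or "spatial"
    val out = NDArray.L2Normalization(Map("mode" -> "instance"))(x)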

    returns

    org.apache.mxnet.NDArray

  44. abstract def L2Normalization(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Normalize the input array using the L2 norm.

    For 1-D NDArray, it computes::

    out = data / sqrt(sum(data ** 2) + eps)

    For N-D NDArray, if the input array has shape (N, N, ..., N),

    with mode = instance, it normalizes each instance in the multidimensional
    array by its L2 norm.::

    for i in 0...N
    out[i,:,:,...,:] = data[i,:,:,...,:] / sqrt(sum(data[i,:,:,...,:] ** 2) + eps)

    with mode = channel, it normalizes each channel in the array by its L2 norm.::

    for i in 0...N
    out[:,i,:,...,:] = data[:,i,:,...,:] / sqrt(sum(data[:,i,:,...,:] ** 2) + eps)

    with mode = spatial, it normalizes the cross channel norm for each position
    in the array by its L2 norm.::

    for dim in 2...N
    for i in 0...N
    out[.....,i,...] = take(out, indices=i, axis=dim) / sqrt(sum(take(out, indices=i, axis=dim) ** 2) + eps)
    -dim-

    Example::

    x = [[[1,2],
          [3,4]],
         [[2,2],
          [5,6]]]

    L2Normalization(x, mode='instance')
    = [[[ 0.18257418  0.36514837]
        [ 0.54772252  0.73029673]]
       [[ 0.24077171  0.24077171]
        [ 0.60192931  0.72231513]]]

    L2Normalization(x, mode='channel')
    = [[[ 0.31622776  0.44721359]
        [ 0.94868326  0.89442718]]
       [[ 0.37139067  0.31622776]
        [ 0.92847669  0.94868326]]]

    L2Normalization(x, mode='spatial')
    = [[[ 0.44721359  0.89442718]
        [ 0.60000002  0.80000001]]
       [[ 0.70710677  0.70710677]
        [ 0.6401844   0.76822126]]]



    Defined in src/operator/l2_normalization.cc:L98

    returns

    org.apache.mxnet.NDArray

  45. abstract def LRN(args: Any*): NDArrayFuncReturn

    Applies local response normalization to the input.

    The local response normalization layer performs "lateral inhibition" by normalizing
    over local input regions.

    If :math:a_{x,y}^{i} is the activity of a neuron computed by applying kernel :math:i
    at position :math:(x, y) and then applying the ReLU nonlinearity, the
    response-normalized activity :math:b_{x,y}^{i} is given by the expression:

    .. math::
    b_{x,y}^{i} = \frac{a_{x,y}^{i}}{\Bigg(k + \frac{\alpha}{n} \sum_{j=max(0, i-\frac{n}{2})}^{min(N-1, i+\frac{n}{2})} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}}

    where the sum runs over :math:n "adjacent" kernel maps at the same spatial position, and :math:N is the total
    number of kernels in the layer.



    Defined in src/operator/nn/lrn.cc:L175
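
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase; nsize is the n from the formula above, while alpha, beta and
    the constant k (knorm) keep their operator defaults::

    import org.apache.mxnet.{NDArray, Shape}

    val data = NDArray.ones(Shape(1, 8, 4, 4))   // (batch, channel, h, w)

    // nsize is the local neighbourhood size n over the channel axis
    val out = NDArray.LRN(Map("nsize" -> 5))(data)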

    returns

    org.apache.mxnet.NDArray

  46. abstract def LRN(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies local response normalization to the input.

    The local response normalization layer performs "lateral inhibition" by normalizing
    over local input regions.

    If :math:a_{x,y}^{i} is the activity of a neuron computed by applying kernel :math:i
    at position :math:(x, y) and then applying the ReLU nonlinearity, the
    response-normalized activity :math:b_{x,y}^{i} is given by the expression:

    .. math::
    b_{x,y}^{i} = \frac{a_{x,y}^{i}}{\Bigg(k + \frac{\alpha}{n} \sum_{j=max(0, i-\frac{n}{2})}^{min(N-1, i+\frac{n}{2})} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}}

    where the sum runs over :math:n "adjacent" kernel maps at the same spatial position, and :math:N is the total
    number of kernels in the layer.



    Defined in src/operator/nn/lrn.cc:L175

    returns

    org.apache.mxnet.NDArray

  47. abstract def LayerNorm(args: Any*): NDArrayFuncReturn

    Layer normalization.

    Normalizes the channels of the input tensor by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis and then
    compute the normalized output, which has the same shape as input, as following:

    .. math::

    out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta

    Both gamma and beta are learnable parameters.

    Unlike BatchNorm and InstanceNorm, the *mean* and *var* are computed along the channel dimension.

    Assume the input has size *k* on axis 1, then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then it also outputs
    data_mean and data_std. Note that no gradient will be passed through these two outputs.

    The parameter axis specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups). The default is -1, which sets the channel
    axis to be the last item in the input shape.



    Defined in src/operator/nn/layer_norm.cc:L94
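
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase and the (data, gamma, beta) input convention::

    import org.apache.mxnet.{NDArray, Shape}

    val data  = NDArray.ones(Shape(2, 5))
    // gamma/beta have shape (k,) where k is the size of the normalized axis
    val gamma = NDArray.ones(Shape(5))
    val beta  = NDArray.zeros(Shape(5))

    // axis = -1 (the default) normalizes over the last axis
    val out = NDArray.LayerNorm(Map("axis" -> -1))(data, gamma, beta)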

    returns

    org.apache.mxnet.NDArray

  48. abstract def LayerNorm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Layer normalization.

    Normalizes the channels of the input tensor by mean and variance, and applies a scale gamma as
    well as offset beta.

    Assume the input has more than one dimension and we normalize along axis 1.
    We first compute the mean and variance along this axis and then
    compute the normalized output, which has the same shape as input, as following:

    .. math::

    out = \frac{data - mean(data, axis)}{\sqrt{var(data, axis) + \epsilon}} * gamma + beta

    Both gamma and beta are learnable parameters.

    Unlike BatchNorm and InstanceNorm, the *mean* and *var* are computed along the channel dimension.

    Assume the input has size *k* on axis 1, then both gamma and beta
    have shape *(k,)*. If output_mean_var is set to true, then it also outputs
    data_mean and data_std. Note that no gradient will be passed through these two outputs.

    The parameter axis specifies which axis of the input shape denotes
    the 'channel' (separately normalized groups). The default is -1, which sets the channel
    axis to be the last item in the input shape.



    Defined in src/operator/nn/layer_norm.cc:L94

    returns

    org.apache.mxnet.NDArray

  49. abstract def LeakyReLU(args: Any*): NDArrayFuncReturn

    Applies Leaky rectified linear unit activation element-wise to the input.

    Leaky ReLUs attempt to fix the "dying ReLU" problem by allowing a small slope
    when the input is negative, and a slope of one when it is positive.

    The following modified ReLU Activation functions are supported:

    - *elu*: Exponential Linear Unit. y = x > 0 ? x : slope * (exp(x)-1)
    - *leaky*: Leaky ReLU. y = x > 0 ? x : slope * x
    - *prelu*: Parametric ReLU. This is same as *leaky* except that slope is learnt during training.
    - *rrelu*: Randomized ReLU. same as *leaky* but the slope is uniformly and randomly chosen from
    *[lower_bound, upper_bound)* for training, while fixed to be
    *(lower_bound+upper_bound)/2* for inference.



    Defined in src/operator/leaky_relu.cc:L63
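
    A minimal Scala sketch of the *leaky* variant, assuming the NDArray
    companion object implements NDArrayBase and that act_type and slope are
    passed as kwargs::

    import org.apache.mxnet.{NDArray, Shape}

    val data = NDArray.array(Array(-2f, -1f, 0f, 1f, 2f), Shape(5))

    // act_type selects the variant; slope is used by "leaky" and "elu"
    val out = NDArray.LeakyReLU(Map("act_type" -> "leaky", "slope" -> 0.25f))(data)
    // -2 -> -0.5, -1 -> -0.25, non-negative inputs pass through unchanged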

    returns

    org.apache.mxnet.NDArray

  50. abstract def LeakyReLU(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies Leaky rectified linear unit activation element-wise to the input.

    Leaky ReLUs attempt to fix the "dying ReLU" problem by allowing a small slope
    when the input is negative, and a slope of one when it is positive.

    The following modified ReLU Activation functions are supported:

    - *elu*: Exponential Linear Unit. y = x > 0 ? x : slope * (exp(x)-1)
    - *leaky*: Leaky ReLU. y = x > 0 ? x : slope * x
    - *prelu*: Parametric ReLU. This is same as *leaky* except that slope is learnt during training.
    - *rrelu*: Randomized ReLU. same as *leaky* but the slope is uniformly and randomly chosen from
    *[lower_bound, upper_bound)* for training, while fixed to be
    *(lower_bound+upper_bound)/2* for inference.



    Defined in src/operator/leaky_relu.cc:L63

    returns

    org.apache.mxnet.NDArray

  51. abstract def LinearRegressionOutput(args: Any*): NDArrayFuncReturn

    Computes and optimizes for squared loss during backward propagation.
    Just outputs data during forward propagation.

    If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value,
    then the squared loss estimated over :math:n samples is defined as

    :math:\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2

    .. note::
    Use the LinearRegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - LinearRegressionOutput(default, default) = default
    - LinearRegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor
    1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L92
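
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase and the (data, label) input convention::

    import org.apache.mxnet.{NDArray, Shape}

    val pred  = NDArray.ones(Shape(4, 1))
    val label = NDArray.zeros(Shape(4, 1))

    // the forward pass simply returns pred; the squared-loss gradient
    // (optionally rescaled by grad_scale) only matters during backward
    val out = NDArray.LinearRegressionOutput(Map("grad_scale" -> 1.0f))(pred, label)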

    returns

    org.apache.mxnet.NDArray

  52. abstract def LinearRegressionOutput(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes and optimizes for squared loss during backward propagation.
    Just outputs data during forward propagation.

    If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value,
    then the squared loss estimated over :math:n samples is defined as

    :math:\text{SquaredLoss}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_2

    .. note::
    Use the LinearRegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - LinearRegressionOutput(default, default) = default
    - LinearRegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor
    1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L92

    returns

    org.apache.mxnet.NDArray

  53. abstract def LogisticRegressionOutput(args: Any*): NDArrayFuncReturn

    Applies a logistic function to the input.

    The logistic function, also known as the sigmoid function, is computed as
    :math:\frac{1}{1+exp(-\textbf{x})}.

    Commonly, the sigmoid is used to squash the real-valued output of a linear model
    :math:w^T x + b into the [0,1] range so that it can be interpreted as a probability.
    It is suitable for binary classification or probability prediction tasks.

    .. note::
    Use the LogisticRegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - LogisticRegressionOutput(default, default) = default
    - LogisticRegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L148
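
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase and the (data, label) input convention::

    import org.apache.mxnet.{NDArray, Shape}

    val data  = NDArray.zeros(Shape(3, 1))   // sigmoid(0) = 0.5
    val label = NDArray.ones(Shape(3, 1))

    // forward output is the element-wise sigmoid of data, here all 0.5
    val out = NDArray.LogisticRegressionOutput(data, label)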

    returns

    org.apache.mxnet.NDArray

  54. abstract def LogisticRegressionOutput(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies a logistic function to the input.

    The logistic function, also known as the sigmoid function, is computed as
    :math:\frac{1}{1+exp(-\textbf{x})}.

    Commonly, the sigmoid is used to squash the real-valued output of a linear model
    :math:w^T x + b into the [0,1] range so that it can be interpreted as a probability.
    It is suitable for binary classification or probability prediction tasks.

    .. note::
    Use the LogisticRegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - LogisticRegressionOutput(default, default) = default
    - LogisticRegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor 1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L148

    returns

    org.apache.mxnet.NDArray

  55. abstract def MAERegressionOutput(args: Any*): NDArrayFuncReturn

    Computes mean absolute error of the input.

    MAE is a risk metric corresponding to the expected value of the absolute error.

    If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value,
    then the mean absolute error (MAE) estimated over :math:n samples is defined as

    :math:\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1

    .. note::
    Use the MAERegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - MAERegressionOutput(default, default) = default
    - MAERegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor
    1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L120

    returns

    org.apache.mxnet.NDArray

  56. abstract def MAERegressionOutput(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes mean absolute error of the input.

    MAE is a risk metric corresponding to the expected value of the absolute error.

    If :math:\hat{y}_i is the predicted value of the i-th sample, and :math:y_i is the corresponding target value,
    then the mean absolute error (MAE) estimated over :math:n samples is defined as

    :math:\text{MAE}(\textbf{Y}, \hat{\textbf{Y}} ) = \frac{1}{n} \sum_{i=0}^{n-1} \lVert \textbf{y}_i - \hat{\textbf{y}}_i \rVert_1

    .. note::
    Use the MAERegressionOutput as the final output layer of a net.

    The storage type of label can be default or csr

    - MAERegressionOutput(default, default) = default
    - MAERegressionOutput(default, csr) = default

    By default, gradients of this loss function are scaled by factor
    1/m, where m is the number of regression outputs of a training example.
    The parameter grad_scale can be used to change this scale to grad_scale/m.



    Defined in src/operator/regression_output.cc:L120

    returns

    org.apache.mxnet.NDArray

  57. abstract def MakeLoss(args: Any*): NDArrayFuncReturn

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross-entropy loss function. Assume out is the
    predicted output and label is the true label; then the cross entropy can be defined as::

    cross_entropy = label * log(out) + (1 - label) * log(1 - out)
    loss = MakeLoss(cross_entropy)

    We need to use MakeLoss when we are creating our own loss function or when we
    want to combine multiple loss functions. Also, we may want to stop some variables'
    gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

    In addition, we can give a scale to the loss by setting grad_scale,
    so that the gradient of the loss will be rescaled during backpropagation.

    .. note:: This operator should be used as a Symbol instead of NDArray.



    Defined in src/operator/make_loss.cc:L71

    returns

    org.apache.mxnet.NDArray

  58. abstract def MakeLoss(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross-entropy loss function. Assume out is the
    predicted output and label is the true label; then the cross entropy can be defined as::

    cross_entropy = label * log(out) + (1 - label) * log(1 - out)
    loss = MakeLoss(cross_entropy)

    We need to use MakeLoss when we are creating our own loss function or when we
    want to combine multiple loss functions. Also, we may want to stop some variables'
    gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

    In addition, we can give a scale to the loss by setting grad_scale,
    so that the gradient of the loss will be rescaled during backpropagation.

    .. note:: This operator should be used as a Symbol instead of NDArray.



    Defined in src/operator/make_loss.cc:L71

    returns

    org.apache.mxnet.NDArray

  59. abstract def Pad(args: Any*): NDArrayFuncReturn

    Pads an input array with a constant or edge values of the array.

    .. note:: Pad is deprecated. Use pad instead.

    .. note:: Current implementation only supports 4D and 5D input arrays with padding applied
    only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.

    This operation pads an input array with either a constant_value or edge values
    along each axis of the input array. The amount of padding is specified by pad_width.

    pad_width is a tuple of integer padding widths for each axis of the format
    (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N
    where N is the number of dimensions of the array.

    For dimension N of the input array, before_N and after_N indicate how many values
    to add before and after the elements of the array along dimension N.
    The widths of the higher two dimensions, before_1, after_1, before_2, and
    after_2, must be 0.

    Example::

    x = [[[[  1.   2.   3.]
           [  4.   5.   6.]]

          [[  7.   8.   9.]
           [ 10.  11.  12.]]]

         [[[ 11.  12.  13.]
           [ 14.  15.  16.]]

          [[ 17.  18.  19.]
           [ 20.  21.  22.]]]]

    pad(x, mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =

        [[[[  1.   1.   2.   3.   3.]
           [  1.   1.   2.   3.   3.]
           [  4.   4.   5.   6.   6.]
           [  4.   4.   5.   6.   6.]]

          [[  7.   7.   8.   9.   9.]
           [  7.   7.   8.   9.   9.]
           [ 10.  10.  11.  12.  12.]
           [ 10.  10.  11.  12.  12.]]]

         [[[ 11.  11.  12.  13.  13.]
           [ 11.  11.  12.  13.  13.]
           [ 14.  14.  15.  16.  16.]
           [ 14.  14.  15.  16.  16.]]

          [[ 17.  17.  18.  19.  19.]
           [ 17.  17.  18.  19.  19.]
           [ 20.  20.  21.  22.  22.]
           [ 20.  20.  21.  22.  22.]]]]

    pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =

        [[[[  0.   0.   0.   0.   0.]
           [  0.   1.   2.   3.   0.]
           [  0.   4.   5.   6.   0.]
           [  0.   0.   0.   0.   0.]]

          [[  0.   0.   0.   0.   0.]
           [  0.   7.   8.   9.   0.]
           [  0.  10.  11.  12.   0.]
           [  0.   0.   0.   0.   0.]]]

         [[[  0.   0.   0.   0.   0.]
           [  0.  11.  12.  13.   0.]
           [  0.  14.  15.  16.   0.]
           [  0.   0.   0.   0.   0.]]

          [[  0.   0.   0.   0.   0.]
           [  0.  17.  18.  19.   0.]
           [  0.  20.  21.  22.   0.]
           [  0.   0.   0.   0.   0.]]]]




    Defined in src/operator/pad.cc:L766
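
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase; the tuple-valued pad_width is passed here in its string form,
    which the generated API is assumed to stringify::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(1, 1, 2, 3))   // 4D input, pad axes 2 and 3 only

    // pad_width follows (before_1, after_1, ..., before_4, after_4)
    val out = NDArray.Pad(Map(
      "mode" -> "constant",
      "constant_value" -> 0f,
      "pad_width" -> "(0,0,0,0,1,1,1,1)"))(x)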

    returns

    org.apache.mxnet.NDArray

  60. abstract def Pad(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Pads an input array with a constant or edge values of the array.

    .. note:: Pad is deprecated. Use pad instead.

    .. note:: Current implementation only supports 4D and 5D input arrays with padding applied
    only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.

    This operation pads an input array with either a constant_value or edge values
    along each axis of the input array. The amount of padding is specified by pad_width.

    pad_width is a tuple of integer padding widths for each axis of the format
    (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N
    where N is the number of dimensions of the array.

    For dimension N of the input array, before_N and after_N indicate how many values
    to add before and after the elements of the array along dimension N.
    The widths of the higher two dimensions, before_1, after_1, before_2, and
    after_2, must be 0.

    Example::

    x = [[[[  1.   2.   3.]
           [  4.   5.   6.]]

          [[  7.   8.   9.]
           [ 10.  11.  12.]]]

         [[[ 11.  12.  13.]
           [ 14.  15.  16.]]

          [[ 17.  18.  19.]
           [ 20.  21.  22.]]]]

    pad(x, mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =

        [[[[  1.   1.   2.   3.   3.]
           [  1.   1.   2.   3.   3.]
           [  4.   4.   5.   6.   6.]
           [  4.   4.   5.   6.   6.]]

          [[  7.   7.   8.   9.   9.]
           [  7.   7.   8.   9.   9.]
           [ 10.  10.  11.  12.  12.]
           [ 10.  10.  11.  12.  12.]]]

         [[[ 11.  11.  12.  13.  13.]
           [ 11.  11.  12.  13.  13.]
           [ 14.  14.  15.  16.  16.]
           [ 14.  14.  15.  16.  16.]]

          [[ 17.  17.  18.  19.  19.]
           [ 17.  17.  18.  19.  19.]
           [ 20.  20.  21.  22.  22.]
           [ 20.  20.  21.  22.  22.]]]]

    pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =

        [[[[  0.   0.   0.   0.   0.]
           [  0.   1.   2.   3.   0.]
           [  0.   4.   5.   6.   0.]
           [  0.   0.   0.   0.   0.]]

          [[  0.   0.   0.   0.   0.]
           [  0.   7.   8.   9.   0.]
           [  0.  10.  11.  12.   0.]
           [  0.   0.   0.   0.   0.]]]

         [[[  0.   0.   0.   0.   0.]
           [  0.  11.  12.  13.   0.]
           [  0.  14.  15.  16.   0.]
           [  0.   0.   0.   0.   0.]]

          [[  0.   0.   0.   0.   0.]
           [  0.  17.  18.  19.   0.]
           [  0.  20.  21.  22.   0.]
           [  0.   0.   0.   0.   0.]]]]




    Defined in src/operator/pad.cc:L766

    returns

    org.apache.mxnet.NDArray

  61. abstract def Pooling(args: Any*): NDArrayFuncReturn

    Performs pooling on the input.

    The shapes for 1-D pooling are

    - **data**: *(batch_size, channel, width)*,
    - **out**: *(batch_size, num_filter, out_width)*.

    The shapes for 2-D pooling are

    - **data**: *(batch_size, channel, height, width)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*, with::

    out_height = f(height, kernel[0], pad[0], stride[0])
    out_width = f(width, kernel[1], pad[1], stride[1])

    The definition of *f* depends on pooling_convention, which has two options:

    - **valid** (default)::

    f(x, k, p, s) = floor((x+2*p-k)/s)+1

    - **full**, which is compatible with Caffe::

    f(x, k, p, s) = ceil((x+2*p-k)/s)+1

    If global_pool is set to true, then global pooling is performed instead,
    i.e. the kernel is reset to (height, width).

    Four pooling options are supported by pool_type:

    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling
    - **lp**: Lp pooling

    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data will have shape *(batch_size, channel, depth,
    height, width)*.

    Notes on Lp pooling:

    Lp pooling was first introduced by this paper: https://arxiv.org/pdf/1204.3968.pdf.
    L-1 pooling is simply sum pooling, while L-inf pooling is simply max pooling.
    Lp pooling stands between those two; in practice, the most common value for p is 2.

    For each window X, the mathematical expression for Lp pooling is:

    .. math::
    f(X) = \sqrt[p]{\sum\limits_{x \in X} x^p}



    Defined in src/operator/nn/pooling.cc:L383
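
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase; tuple-valued kernel and stride are passed in string form::

    import org.apache.mxnet.{NDArray, Shape}

    val data = NDArray.ones(Shape(1, 3, 4, 4))   // (batch, channel, h, w)

    // 2x2 max pooling with stride 2: out_height = floor((4+2*0-2)/2)+1 = 2
    val out = NDArray.Pooling(Map(
      "pool_type" -> "max",
      "kernel" -> "(2,2)",
      "stride" -> "(2,2)"))(data)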

    returns

    org.apache.mxnet.NDArray

  62. abstract def Pooling(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs pooling on the input.

    The shapes for 1-D pooling are

    - **data**: *(batch_size, channel, width)*,
    - **out**: *(batch_size, num_filter, out_width)*.

    The shapes for 2-D pooling are

    - **data**: *(batch_size, channel, height, width)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*, with::

    out_height = f(height, kernel[0], pad[0], stride[0])
    out_width = f(width, kernel[1], pad[1], stride[1])

    The definition of *f* depends on pooling_convention, which has two options:

    - **valid** (default)::

    f(x, k, p, s) = floor((x+2*p-k)/s)+1

    - **full**, which is compatible with Caffe::

    f(x, k, p, s) = ceil((x+2*p-k)/s)+1

    If global_pool is set to true, then global pooling is performed instead,
    i.e. the kernel is reset to (height, width).

    Four pooling options are supported by pool_type:

    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling
    - **lp**: Lp pooling

    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data will have shape *(batch_size, channel, depth,
    height, width)*.

    Notes on Lp pooling:

    Lp pooling was first introduced by this paper: https://arxiv.org/pdf/1204.3968.pdf.
    L-1 pooling is simply sum pooling, while L-inf pooling is simply max pooling.
    Lp pooling stands between those two; in practice, the most common value for p is 2.

    For each window X, the mathematical expression for Lp pooling is:

    .. math::
    f(X) = \sqrt[p]{\sum\limits_{x \in X} x^p}



    Defined in src/operator/nn/pooling.cc:L383

    returns

    org.apache.mxnet.NDArray

  63. abstract def Pooling_v1(args: Any*): NDArrayFuncReturn

    This operator is DEPRECATED.
    Perform pooling on the input.

    The shapes for 2-D pooling are

    - **data**: *(batch_size, channel, height, width)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*, with::

    out_height = f(height, kernel[0], pad[0], stride[0])
    out_width = f(width, kernel[1], pad[1], stride[1])

    The definition of *f* depends on pooling_convention, which has two options:

    - **valid** (default)::

    f(x, k, p, s) = floor((x+2*p-k)/s)+1

    - **full**, which is compatible with Caffe::

    f(x, k, p, s) = ceil((x+2*p-k)/s)+1

    If global_pool is set to true, then global pooling is performed instead,
    i.e. the kernel is reset to (height, width).

    Three pooling options are supported by pool_type:

    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling

    1-D pooling is a special case of 2-D pooling with *width=1* and
    *kernel[1]=1*.

    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data will have shape *(batch_size, channel, depth,
    height, width)*.



    Defined in src/operator/pooling_v1.cc:L104

    returns

    org.apache.mxnet.NDArray

  64. abstract def Pooling_v1(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    This operator is DEPRECATED.
    Perform pooling on the input.

    The shapes for 2-D pooling are

    - **data**: *(batch_size, channel, height, width)*
    - **out**: *(batch_size, num_filter, out_height, out_width)*, with::

    out_height = f(height, kernel[0], pad[0], stride[0])
    out_width = f(width, kernel[1], pad[1], stride[1])

    The definition of *f* depends on pooling_convention, which has two options:

    - **valid** (default)::

    f(x, k, p, s) = floor((x+2*p-k)/s)+1

    - **full**, which is compatible with Caffe::

    f(x, k, p, s) = ceil((x+2*p-k)/s)+1

    If global_pool is set to true, then global pooling is performed instead,
    i.e. the kernel is reset to (height, width).

    Three pooling options are supported by pool_type:

    - **avg**: average pooling
    - **max**: max pooling
    - **sum**: sum pooling

    1-D pooling is a special case of 2-D pooling with *width=1* and
    *kernel[1]=1*.

    For 3-D pooling, an additional *depth* dimension is added before
    *height*. Namely the input data will have shape *(batch_size, channel, depth,
    height, width)*.



    Defined in src/operator/pooling_v1.cc:L104

    returns

    org.apache.mxnet.NDArray

  65. abstract def RNN(args: Any*): NDArrayFuncReturn

    Applies recurrent layers to input data. Currently, vanilla RNN, LSTM and GRU are
    implemented, with both multi-layer and bidirectional support.

    **Vanilla RNN**

    Applies a single-gate recurrent layer to input X. Two kinds of activation function are supported:
    ReLU and Tanh.

    With ReLU activation function:

    .. math::
    h_t = relu(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

    With Tanh activation function:

    .. math::
    h_t = \tanh(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

    Reference paper: Finding structure in time - Elman, 1988.
    https://crl.ucsd.edu/~elman/Papers/fsit.pdf

    **LSTM**

    Long Short-Term Memory - Hochreiter, 1997. http://www.bioinf.jku.at/publications/older/2604.pdf

    .. math::
    \begin{array}{ll}
    i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
    f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
    g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\
    o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
    c_t = f_t * c_{(t-1)} + i_t * g_t \\
    h_t = o_t * \tanh(c_t)
    \end{array}

    **GRU**

    Gated Recurrent Unit - Cho et al. 2014. http://arxiv.org/abs/1406.1078

    The definition of GRU here is slightly different from the paper, but compatible with cuDNN.

    .. math::
    \begin{array}{ll}
    r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
    z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
    n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\
    h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\
    \end{array}
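
    As a usage illustration (not from the original doc), here is a hedged Scala
    sketch of a single-layer vanilla-RNN forward pass. The (data, parameters,
    state) argument order and the flattened parameter layout are assumptions
    taken from the C++ operator; verify against your binding version::

    import org.apache.mxnet.{NDArray, Shape}

    val seqLen = 5; val batch = 2; val inputSize = 4
    val stateSize = 8; val numLayers = 1

    val data  = NDArray.ones(Shape(seqLen, batch, inputSize))
    val state = NDArray.zeros(Shape(numLayers, batch, stateSize))
    // all gate weights and biases flattened into one vector; for a single
    // rnn_tanh layer this is stateSize*(inputSize + stateSize + 2) elements
    val params = NDArray.zeros(Shape(stateSize * (inputSize + stateSize + 2)))

    val out = NDArray.RNN(Map(
      "state_size" -> stateSize,
      "num_layers" -> numLayers,
      "mode" -> "rnn_tanh"))(data, params, state)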

    returns

    org.apache.mxnet.NDArray

  66. abstract def RNN(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies recurrent layers to input data. Currently, vanilla RNN, LSTM and GRU are
    implemented, with both multi-layer and bidirectional support.

    **Vanilla RNN**

    Applies a single-gate recurrent layer to input X. Two kinds of activation function are supported:
    ReLU and Tanh.

    With ReLU activation function:

    .. math::
    h_t = relu(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

    With Tanh activation function:

    .. math::
    h_t = \tanh(W_{ih} * x_t + b_{ih} + W_{hh} * h_{(t-1)} + b_{hh})

    Reference paper: Finding structure in time - Elman, 1988.
    https://crl.ucsd.edu/~elman/Papers/fsit.pdf

    **LSTM**

    Long Short-Term Memory - Hochreiter, 1997. http://www.bioinf.jku.at/publications/older/2604.pdf

    .. math::
    \begin{array}{ll}
    i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
    f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
    g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\
    o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
    c_t = f_t * c_{(t-1)} + i_t * g_t \\
    h_t = o_t * \tanh(c_t)
    \end{array}

    **GRU**

    Gated Recurrent Unit - Cho et al. 2014. http://arxiv.org/abs/1406.1078

    The definition of GRU here is slightly different from the paper, but compatible with cuDNN.

    .. math::
    \begin{array}{ll}
    r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
    z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
    n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\
    h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\
    \end{array}

    returns

    org.apache.mxnet.NDArray

  67. abstract def ROIPooling(args: Any*): NDArrayFuncReturn

    Performs region of interest (ROI) pooling on the input array.

    ROI pooling is a variant of a max pooling layer, in which the output size is fixed and
    region of interest is a parameter. Its purpose is to perform max pooling on the inputs
    of non-uniform sizes to obtain fixed-size feature maps. ROI pooling is a neural-net
    layer mostly used in training a Fast R-CNN network for object detection.

    This operator takes a 4D feature map as an input array and region proposals as rois,
    then it pools over sub-regions of input and produces a fixed-sized output array
    regardless of the ROI size.

    To crop the feature map accordingly, you can resize the bounding box coordinates
    by changing the parameters rois and spatial_scale.

    The cropped feature maps are pooled by standard max pooling operation to a fixed size output
    indicated by a pooled_size parameter. batch_size will change to the number of region
    bounding boxes after ROIPooling.

    The size of each region of interest doesn't have to be perfectly divisible by
    the number of pooling sections (pooled_size).

    Example::

    x = [[[[  0.,   1.,   2.,   3.,   4.,   5.],
           [  6.,   7.,   8.,   9.,  10.,  11.],
           [ 12.,  13.,  14.,  15.,  16.,  17.],
           [ 18.,  19.,  20.,  21.,  22.,  23.],
           [ 24.,  25.,  26.,  27.,  28.,  29.],
           [ 30.,  31.,  32.,  33.,  34.,  35.],
           [ 36.,  37.,  38.,  39.,  40.,  41.],
           [ 42.,  43.,  44.,  45.,  46.,  47.]]]]

    // region of interest, i.e. bounding box coordinates
    y = [[0,0,0,4,4]]

    // returns array of shape (2,2) according to the given roi with max pooling
    ROIPooling(x, y, (2,2), 1.0) = [[[[ 14.,  16.],
                                      [ 26.,  28.]]]]

    // region of interest is changed due to the change in spatial_scale parameter
    ROIPooling(x, y, (2,2), 0.7) = [[[[  7.,   9.],
                                      [ 19.,  21.]]]]




    Defined in src/operator/roi_pooling.cc:L295
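
    A Scala sketch reproducing the first example above, assuming the NDArray
    companion object implements NDArrayBase and that pooled_size is passed in
    string form::

    import org.apache.mxnet.{NDArray, Shape}

    // (batch=1, channel=1, 8x6) feature map holding 0..47, as in the example
    val x = NDArray.array((0 until 48).map(_.toFloat).toArray, Shape(1, 1, 8, 6))
    // one ROI per row: (batch_index, x1, y1, x2, y2)
    val rois = NDArray.array(Array(0f, 0f, 0f, 4f, 4f), Shape(1, 5))

    val out = NDArray.ROIPooling(Map(
      "pooled_size" -> "(2,2)",
      "spatial_scale" -> 1.0f))(x, rois)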

    returns

    org.apache.mxnet.NDArray

  68. abstract def ROIPooling(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs region of interest (ROI) pooling on the input array.

    ROI pooling is a variant of a max pooling layer, in which the output size is fixed and
    region of interest is a parameter. Its purpose is to perform max pooling on the inputs
    of non-uniform sizes to obtain fixed-size feature maps. ROI pooling is a neural-net
    layer mostly used in training a Fast R-CNN network for object detection.

    This operator takes a 4D feature map as an input array and region proposals as rois,
    then it pools over sub-regions of input and produces a fixed-sized output array
    regardless of the ROI size.

    To crop the feature map accordingly, you can resize the bounding box coordinates
    by changing the parameters rois and spatial_scale.

    The cropped feature maps are pooled by standard max pooling operation to a fixed size output
    indicated by a pooled_size parameter. batch_size will change to the number of region
    bounding boxes after ROIPooling.

    The size of each region of interest doesn't have to be perfectly divisible by
    the number of pooling sections (pooled_size).

    Example::

    x = [[[[  0.,   1.,   2.,   3.,   4.,   5.],
           [  6.,   7.,   8.,   9.,  10.,  11.],
           [ 12.,  13.,  14.,  15.,  16.,  17.],
           [ 18.,  19.,  20.,  21.,  22.,  23.],
           [ 24.,  25.,  26.,  27.,  28.,  29.],
           [ 30.,  31.,  32.,  33.,  34.,  35.],
           [ 36.,  37.,  38.,  39.,  40.,  41.],
           [ 42.,  43.,  44.,  45.,  46.,  47.]]]]

    // region of interest, i.e. bounding box coordinates
    y = [[0,0,0,4,4]]

    // returns array of shape (2,2) according to the given roi with max pooling
    ROIPooling(x, y, (2,2), 1.0) = [[[[ 14.,  16.],
                                      [ 26.,  28.]]]]

    // region of interest is changed due to the change in spatial_scale parameter
    ROIPooling(x, y, (2,2), 0.7) = [[[[  7.,   9.],
                                      [ 19.,  21.]]]]




    Defined in src/operator/roi_pooling.cc:L295

    returns

    org.apache.mxnet.NDArray

  69. abstract def Reshape(args: Any*): NDArrayFuncReturn

    Reshapes the input array.

    .. note:: Reshape is deprecated, use reshape

    Given an array and a shape, this function returns a copy of the array in the new shape.
    The shape is a tuple of integers such as (2,3,4). The size of the new shape should be
    the same as the size of the input array.

    Example::

    reshape([1,2,3,4], shape=(2,2)) = [[1,2],
                                       [3,4]]

    Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

    - 0  copies this dimension from the input to the output shape.

      Example::

      - input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
      - input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)

    - -1  infers the dimension of the output shape by using the remainder of the
      input dimensions, keeping the size of the new array the same as that of the
      input array. At most one dimension of shape can be -1.

      Example::

      - input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
      - input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
      - input shape = (2,3,4), shape = (-1,), output shape = (24,)

    - -2  copies all/remainder of the input dimensions to the output shape.

      Example::

      - input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
      - input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
      - input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)

    - -3  uses the product of two consecutive dimensions of the input shape as the
      output dimension.

      Example::

      - input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
      - input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
      - input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
      - input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)

    - -4  splits one dimension of the input into the two dimensions passed
      subsequent to -4 in shape (which can contain -1).

      Example::

      - input shape = (2,3,4), shape = (-4,1,2,-2), output shape = (1,2,3,4)
      - input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)

    If the argument reverse is set to 1, then the special values are inferred
    from right to left.

    Example::

    - without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
    - with reverse=1, output shape will be (50,4)



      Defined in src/operator/tensor/matrix_op.cc:L168
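
    A minimal Scala sketch, assuming the NDArray companion object implements
    NDArrayBase and that the target shape is passed in string form::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), Shape(2, 3))

    // -1 infers the remaining dimension: (2,3) -> (3,2)
    val out = NDArray.Reshape(Map("shape" -> "(3,-1)"))(x)
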
    returns

    org.apache.mxnet.NDArray

  70. abstract def Reshape(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reshapes the input array.

    .. note:: Reshape is deprecated, use reshape

    Given an array and a shape, this function returns a copy of the array in the new shape.
    The shape is a tuple of integers such as (2,3,4). The size of the new shape should be
    the same as the size of the input array.

    Example::

    reshape([1,2,3,4], shape=(2,2)) = [[1,2],
                                       [3,4]]

    Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

    - 0 copy this dimension from the input to the output shape.

    Example::

    • input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
    • input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)

      - -1 infers the dimension of the output shape by using the remainder of the input dimensions
      keeping the size of the new array same as that of the input array.
      At most one dimension of shape can be -1.

      Example::

    • input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
    • input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
    • input shape = (2,3,4), shape=(-1,), output shape = (24,)

      - -2 copy all/remainder of the input dimensions to the output shape.

      Example::

    • input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)

      - -3 use the product of two consecutive dimensions of the input shape as the output dimension.

      Example::

    • input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
    • input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
    • input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
    • input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)

      - -4 split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).

      Example::

    • input shape = (2,3,4), shape = (-4,1,2,-2), output shape = (1,2,3,4)
    • input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)

      If the argument reverse is set to 1, then the special values are inferred from right to left.

      Example::

    • without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
    • with reverse=1, output shape will be (50,4).



      Defined in src/operator/tensor/matrix_op.cc:L168
    returns

    org.apache.mxnet.NDArray
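
    A minimal Scala sketch of invoking the kwargs overload above through the NDArray companion object (assumptions: the target shape may be passed in its string form, and NDArrayFuncReturn.head unwraps the single output; both may vary by MXNet version)::

    import org.apache.mxnet._

    // Reshape a length-4 vector into a 2 x 2 matrix.
    val x = NDArray.array(Array(1f, 2f, 3f, 4f), shape = Shape(4))
    val y = NDArray.Reshape(Map("shape" -> "(2, 2)"))(x).head
    println(y.shape)                   // expected: (2,2)
    println(y.toArray.mkString(", "))  // expected: 1.0, 2.0, 3.0, 4.0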

  71. abstract def SVMOutput(args: Any*): NDArrayFuncReturn

    Computes support vector machine based transformation of the input.

    This tutorial demonstrates using an SVM as the output layer for classification instead of softmax:
    https://github.com/dmlc/mxnet/tree/master/example/svm_mnist.

    Computes support vector machine based transformation of the input.

    This tutorial demonstrates using an SVM as the output layer for classification instead of softmax:
    https://github.com/dmlc/mxnet/tree/master/example/svm_mnist.

    returns

    org.apache.mxnet.NDArray

  72. abstract def SVMOutput(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes support vector machine based transformation of the input.

    This tutorial demonstrates using an SVM as the output layer for classification instead of softmax:
    https://github.com/dmlc/mxnet/tree/master/example/svm_mnist.

    Computes support vector machine based transformation of the input.

    This tutorial demonstrates using an SVM as the output layer for classification instead of softmax:
    https://github.com/dmlc/mxnet/tree/master/example/svm_mnist.

    returns

    org.apache.mxnet.NDArray

  73. abstract def SequenceLast(args: Any*): NDArrayFuncReturn

    Takes the last element of a sequence.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array
    of the form [batch_size, other_feature_dims].

    Parameter sequence_length is used to handle variable-length sequences.

    Takes the last element of a sequence.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array
    of the form [batch_size, other_feature_dims].

    Parameter sequence_length is used to handle variable-length sequences. sequence_length should be
    an input array of positive ints of dimension [batch_size]. To use this parameter,
    set use_sequence_length to True, otherwise each example in the batch is assumed
    to have the max sequence length.

    .. note:: Alternatively, you can also use take operator.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.],
          [ 7., 8., 9.]],

         [[ 10., 11., 12.],
          [ 13., 14., 15.],
          [ 16., 17., 18.]],

         [[ 19., 20., 21.],
          [ 22., 23., 24.],
          [ 25., 26., 27.]]]

    // returns last sequence when sequence_length parameter is not used
    SequenceLast(x) = [[ 19., 20., 21.],
                       [ 22., 23., 24.],
                       [ 25., 26., 27.]]

    // sequence_length is used
    SequenceLast(x, sequence_length=[1,1,1], use_sequence_length=True) =
                 [[ 1., 2., 3.],
                  [ 4., 5., 6.],
                  [ 7., 8., 9.]]

    // sequence_length is used
    SequenceLast(x, sequence_length=[1,2,3], use_sequence_length=True) =
                 [[ 1., 2., 3.],
                  [ 13., 14., 15.],
                  [ 25., 26., 27.]]




    Defined in src/operator/sequence_last.cc:L92

    returns

    org.apache.mxnet.NDArray

  74. abstract def SequenceLast(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Takes the last element of a sequence.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array
    of the form [batch_size, other_feature_dims].

    Parameter sequence_length is used to handle variable-length sequences.

    Takes the last element of a sequence.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns a (n-1)-dimensional array
    of the form [batch_size, other_feature_dims].

    Parameter sequence_length is used to handle variable-length sequences. sequence_length should be
    an input array of positive ints of dimension [batch_size]. To use this parameter,
    set use_sequence_length to True, otherwise each example in the batch is assumed
    to have the max sequence length.

    .. note:: Alternatively, you can also use take operator.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.],
          [ 7., 8., 9.]],

         [[ 10., 11., 12.],
          [ 13., 14., 15.],
          [ 16., 17., 18.]],

         [[ 19., 20., 21.],
          [ 22., 23., 24.],
          [ 25., 26., 27.]]]

    // returns last sequence when sequence_length parameter is not used
    SequenceLast(x) = [[ 19., 20., 21.],
                       [ 22., 23., 24.],
                       [ 25., 26., 27.]]

    // sequence_length is used
    SequenceLast(x, sequence_length=[1,1,1], use_sequence_length=True) =
                 [[ 1., 2., 3.],
                  [ 4., 5., 6.],
                  [ 7., 8., 9.]]

    // sequence_length is used
    SequenceLast(x, sequence_length=[1,2,3], use_sequence_length=True) =
                 [[ 1., 2., 3.],
                  [ 13., 14., 15.],
                  [ 25., 26., 27.]]




    Defined in src/operator/sequence_last.cc:L92

    returns

    org.apache.mxnet.NDArray
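
    A minimal Scala sketch of the call (assumptions: data and sequence_length are passed positionally in the operator's input order, and the boolean kwarg is stringified for the backend)::

    import org.apache.mxnet._

    // Two sequences of max length 3 with one feature each;
    // layout is [max_sequence_length, batch_size, feature_dims] = (3, 2, 1).
    val x   = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2, 1))
    val len = NDArray.array(Array(1f, 3f), shape = Shape(2))  // per-example lengths
    val last = NDArray.SequenceLast(Map("use_sequence_length" -> true))(x, len).head
    println(last.shape)                   // expected: (2,1)
    println(last.toArray.mkString(", "))  // expected: 1.0, 6.0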

  75. abstract def SequenceMask(args: Any*): NDArrayFuncReturn

    Sets all elements outside the sequence to a constant value.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.

    Sets all elements outside the sequence to a constant value.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences. sequence_length
    should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length and
    this operator works as the identity operator.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // works as identity operator when sequence_length parameter is not used
    SequenceMask(x) = [[[ 1., 2., 3.],
                        [ 4., 5., 6.]],

                       [[ 7., 8., 9.],
                        [ 10., 11., 12.]],

                       [[ 13., 14., 15.],
                        [ 16., 17., 18.]]]

    // sequence_length [1,1] means 1 of each batch will be kept
    // and other rows are masked with default mask value = 0
    SequenceMask(x, sequence_length=[1,1], use_sequence_length=True) =
                 [[[ 1., 2., 3.],
                   [ 4., 5., 6.]],

                  [[ 0., 0., 0.],
                   [ 0., 0., 0.]],

                  [[ 0., 0., 0.],
                   [ 0., 0., 0.]]]

    // sequence_length [2,3] means 2 of batch B1 and 3 of batch B2 will be kept
    // and other rows are masked with value = 1
    SequenceMask(x, sequence_length=[2,3], use_sequence_length=True, value=1) =
                 [[[ 1., 2., 3.],
                   [ 4., 5., 6.]],

                  [[ 7., 8., 9.],
                   [ 10., 11., 12.]],

                  [[ 1., 1., 1.],
                   [ 16., 17., 18.]]]



    Defined in src/operator/sequence_mask.cc:L114

    returns

    org.apache.mxnet.NDArray

  76. abstract def SequenceMask(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Sets all elements outside the sequence to a constant value.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.

    Sets all elements outside the sequence to a constant value.

    This function takes an n-dimensional input array of the form
    [max_sequence_length, batch_size, other_feature_dims] and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences. sequence_length
    should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length and
    this operator works as the identity operator.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // works as identity operator when sequence_length parameter is not used
    SequenceMask(x) = [[[ 1., 2., 3.],
                        [ 4., 5., 6.]],

                       [[ 7., 8., 9.],
                        [ 10., 11., 12.]],

                       [[ 13., 14., 15.],
                        [ 16., 17., 18.]]]

    // sequence_length [1,1] means 1 of each batch will be kept
    // and other rows are masked with default mask value = 0
    SequenceMask(x, sequence_length=[1,1], use_sequence_length=True) =
                 [[[ 1., 2., 3.],
                   [ 4., 5., 6.]],

                  [[ 0., 0., 0.],
                   [ 0., 0., 0.]],

                  [[ 0., 0., 0.],
                   [ 0., 0., 0.]]]

    // sequence_length [2,3] means 2 of batch B1 and 3 of batch B2 will be kept
    // and other rows are masked with value = 1
    SequenceMask(x, sequence_length=[2,3], use_sequence_length=True, value=1) =
                 [[[ 1., 2., 3.],
                   [ 4., 5., 6.]],

                  [[ 7., 8., 9.],
                   [ 10., 11., 12.]],

                  [[ 1., 1., 1.],
                   [ 16., 17., 18.]]]



    Defined in src/operator/sequence_mask.cc:L114

    returns

    org.apache.mxnet.NDArray
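
    A minimal Scala sketch under the same calling conventions (positional NDArray inputs, kwargs for parameters; the stringification of the boolean and float kwargs is an assumption)::

    import org.apache.mxnet._

    // (3, 2, 1) layout; keep 1 step of the first sequence, 2 of the second,
    // and overwrite the masked positions with value = 1.
    val x   = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2, 1))
    val len = NDArray.array(Array(1f, 2f), shape = Shape(2))
    val masked = NDArray.SequenceMask(Map(
      "use_sequence_length" -> true, "value" -> 1f))(x, len).head
    println(masked.toArray.mkString(", "))  // expected: 1.0, 2.0, 1.0, 4.0, 1.0, 1.0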

  77. abstract def SequenceReverse(args: Any*): NDArrayFuncReturn

    Reverses the elements of each sequence.

    This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims]
    and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.
    sequence_length should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // returns reversed sequence when sequence_length parameter is not used
    SequenceReverse(x) = [[[ 13., 14., 15.],
                           [ 16., 17., 18.]],

                          [[ 7., 8., 9.],
                           [ 10., 11., 12.]],

                          [[ 1., 2., 3.],
                           [ 4., 5., 6.]]]

    // sequence_length [2,2] means 2 rows of
    // both batch B1 and B2 will be reversed.
    SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                  [[[ 7., 8., 9.],
                    [ 10., 11., 12.]],

                   [[ 1., 2., 3.],
                    [ 4., 5., 6.]],

                   [[ 13., 14., 15.],
                    [ 16., 17., 18.]]]

    // sequence_length [2,3] means 2 rows of batch B1 and 3 rows of batch B2
    // will be reversed.
    SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                 [[[ 7., 8., 9.],
                   [ 16., 17., 18.]],

                  [[ 1., 2., 3.],
                   [ 10., 11., 12.]],

                  [[ 13., 14., 15.],
                   [ 4., 5., 6.]]]



    Defined in src/operator/sequence_reverse.cc:L113

    Reverses the elements of each sequence.

    This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims]
    and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.
    sequence_length should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // returns reversed sequence when sequence_length parameter is not used
    SequenceReverse(x) = [[[ 13., 14., 15.],
                           [ 16., 17., 18.]],

                          [[ 7., 8., 9.],
                           [ 10., 11., 12.]],

                          [[ 1., 2., 3.],
                           [ 4., 5., 6.]]]

    // sequence_length [2,2] means 2 rows of
    // both batch B1 and B2 will be reversed.
    SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                  [[[ 7., 8., 9.],
                    [ 10., 11., 12.]],

                   [[ 1., 2., 3.],
                    [ 4., 5., 6.]],

                   [[ 13., 14., 15.],
                    [ 16., 17., 18.]]]

    // sequence_length [2,3] means 2 rows of batch B1 and 3 rows of batch B2
    // will be reversed.
    SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                 [[[ 7., 8., 9.],
                   [ 16., 17., 18.]],

                  [[ 1., 2., 3.],
                   [ 10., 11., 12.]],

                  [[ 13., 14., 15.],
                   [ 4., 5., 6.]]]



    Defined in src/operator/sequence_reverse.cc:L113

    returns

    org.apache.mxnet.NDArray

  78. abstract def SequenceReverse(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reverses the elements of each sequence.

    This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims]
    and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.
    sequence_length should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // returns reversed sequence when sequence_length parameter is not used
    SequenceReverse(x) = [[[ 13., 14., 15.],
                           [ 16., 17., 18.]],

                          [[ 7., 8., 9.],
                           [ 10., 11., 12.]],

                          [[ 1., 2., 3.],
                           [ 4., 5., 6.]]]

    // sequence_length [2,2] means 2 rows of
    // both batch B1 and B2 will be reversed.
    SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                  [[[ 7., 8., 9.],
                    [ 10., 11., 12.]],

                   [[ 1., 2., 3.],
                    [ 4., 5., 6.]],

                   [[ 13., 14., 15.],
                    [ 16., 17., 18.]]]

    // sequence_length [2,3] means 2 rows of batch B1 and 3 rows of batch B2
    // will be reversed.
    SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                 [[[ 7., 8., 9.],
                   [ 16., 17., 18.]],

                  [[ 1., 2., 3.],
                   [ 10., 11., 12.]],

                  [[ 13., 14., 15.],
                   [ 4., 5., 6.]]]



    Defined in src/operator/sequence_reverse.cc:L113

    Reverses the elements of each sequence.

    This function takes an n-dimensional input array of the form [max_sequence_length, batch_size, other_feature_dims]
    and returns an array of the same shape.

    Parameter sequence_length is used to handle variable-length sequences.
    sequence_length should be an input array of positive ints of dimension [batch_size].
    To use this parameter, set use_sequence_length to True,
    otherwise each example in the batch is assumed to have the max sequence length.

    Example::

    x = [[[ 1., 2., 3.],
          [ 4., 5., 6.]],

         [[ 7., 8., 9.],
          [ 10., 11., 12.]],

         [[ 13., 14., 15.],
          [ 16., 17., 18.]]]

    // Batch 1
    B1 = [[ 1., 2., 3.],
          [ 7., 8., 9.],
          [ 13., 14., 15.]]

    // Batch 2
    B2 = [[ 4., 5., 6.],
          [ 10., 11., 12.],
          [ 16., 17., 18.]]

    // returns reversed sequence when sequence_length parameter is not used
    SequenceReverse(x) = [[[ 13., 14., 15.],
                           [ 16., 17., 18.]],

                          [[ 7., 8., 9.],
                           [ 10., 11., 12.]],

                          [[ 1., 2., 3.],
                           [ 4., 5., 6.]]]

    // sequence_length [2,2] means 2 rows of
    // both batch B1 and B2 will be reversed.
    SequenceReverse(x, sequence_length=[2,2], use_sequence_length=True) =
                  [[[ 7., 8., 9.],
                    [ 10., 11., 12.]],

                   [[ 1., 2., 3.],
                    [ 4., 5., 6.]],

                   [[ 13., 14., 15.],
                    [ 16., 17., 18.]]]

    // sequence_length [2,3] means 2 rows of batch B1 and 3 rows of batch B2
    // will be reversed.
    SequenceReverse(x, sequence_length=[2,3], use_sequence_length=True) =
                 [[[ 7., 8., 9.],
                   [ 16., 17., 18.]],

                  [[ 1., 2., 3.],
                   [ 10., 11., 12.]],

                  [[ 13., 14., 15.],
                   [ 4., 5., 6.]]]



    Defined in src/operator/sequence_reverse.cc:L113

    returns

    org.apache.mxnet.NDArray
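
    A minimal Scala sketch (no sequence_length, so every sequence is fully reversed along the time axis)::

    import org.apache.mxnet._

    // (3, 2, 1) layout: reversing time swaps t=0 and t=2.
    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2, 1))
    val rev = NDArray.SequenceReverse(x).head
    println(rev.toArray.mkString(", "))  // expected: 5.0, 6.0, 3.0, 4.0, 1.0, 2.0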

  79. abstract def SliceChannel(args: Any*): NDArrayFuncReturn

    Splits an array along a particular axis into multiple sub-arrays.

    ..

    Splits an array along a particular axis into multiple sub-arrays.

    .. note:: SliceChannel is deprecated. Use split instead.

    **Note** that num_outputs should evenly divide the length of the axis
    along which to split the array.

    Example::

    x = [[[ 1.]
          [ 2.]]
         [[ 3.]
          [ 4.]]
         [[ 5.]
          [ 6.]]]
    x.shape = (3, 2, 1)

    y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
    y = [[[ 1.]]
         [[ 3.]]
         [[ 5.]]]

        [[[ 2.]]
         [[ 4.]]
         [[ 6.]]]

    y[0].shape = (3, 1, 1)

    z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
    z = [[[ 1.]
          [ 2.]]]

        [[[ 3.]
          [ 4.]]]

        [[[ 5.]
          [ 6.]]]

    z[0].shape = (1, 2, 1)

    squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting squeeze_axis to 1 removes the length-1 axis only
    along the axis along which the array was split.
    Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.

    Example::

    z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
    z = [[ 1.]
         [ 2.]]

        [[ 3.]
         [ 4.]]

        [[ 5.]
         [ 6.]]

    z[0].shape = (2, 1)



    Defined in src/operator/slice_channel.cc:L107

    returns

    org.apache.mxnet.NDArray

  80. abstract def SliceChannel(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Splits an array along a particular axis into multiple sub-arrays.

    ..

    Splits an array along a particular axis into multiple sub-arrays.

    .. note:: SliceChannel is deprecated. Use split instead.

    **Note** that num_outputs should evenly divide the length of the axis
    along which to split the array.

    Example::

    x = [[[ 1.]
          [ 2.]]
         [[ 3.]
          [ 4.]]
         [[ 5.]
          [ 6.]]]
    x.shape = (3, 2, 1)

    y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
    y = [[[ 1.]]
         [[ 3.]]
         [[ 5.]]]

        [[[ 2.]]
         [[ 4.]]
         [[ 6.]]]

    y[0].shape = (3, 1, 1)

    z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
    z = [[[ 1.]
          [ 2.]]]

        [[[ 3.]
          [ 4.]]]

        [[[ 5.]
          [ 6.]]]

    z[0].shape = (1, 2, 1)

    squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting squeeze_axis to 1 removes the length-1 axis only
    along the axis along which the array was split.
    Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.

    Example::

    z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
    z = [[ 1.]
         [ 2.]]

        [[ 3.]
         [ 4.]]

        [[ 5.]
         [ 6.]]

    z[0].shape = (2, 1)



    Defined in src/operator/slice_channel.cc:L107

    returns

    org.apache.mxnet.NDArray
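
    A minimal Scala sketch; this is a multi-output operator, so the returned NDArrayFuncReturn is indexed rather than unwrapped with head (indexing via apply is an assumption)::

    import org.apache.mxnet._

    // Split a (3, 2, 1) array into 2 sub-arrays along axis 1.
    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2, 1))
    val parts = NDArray.SliceChannel(Map("num_outputs" -> 2, "axis" -> 1))(x)
    println(parts(0).shape)  // expected: (3,1,1)
    println(parts(1).shape)  // expected: (3,1,1)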

  81. abstract def Softmax(args: Any*): NDArrayFuncReturn

    Please use SoftmaxOutput.

    ..

    Please use SoftmaxOutput.

    .. note::

    This operator has been renamed to SoftmaxOutput, which
    computes the gradient of cross-entropy loss w.r.t softmax output.
    To just compute softmax output, use the softmax operator.



    Defined in src/operator/softmax_output.cc:L138

    returns

    org.apache.mxnet.NDArray

  82. abstract def Softmax(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Please use SoftmaxOutput.

    ..

    Please use SoftmaxOutput.

    .. note::

    This operator has been renamed to SoftmaxOutput, which
    computes the gradient of cross-entropy loss w.r.t softmax output.
    To just compute softmax output, use the softmax operator.



    Defined in src/operator/softmax_output.cc:L138

    returns

    org.apache.mxnet.NDArray

  83. abstract def SoftmaxActivation(args: Any*): NDArrayFuncReturn

    Applies softmax activation to input.

    Applies softmax activation to input. This is intended for internal layers.

    .. note::

    This operator has been deprecated, please use softmax.

    If mode = instance, this operator will compute a softmax for each instance in the batch.
    This is the default mode.

    If mode = channel, this operator will compute a k-class softmax at each position
    of each instance, where k = num_channel. This mode can only be used when the input array
    has at least 3 dimensions.
    This can be used for fully convolutional networks, image segmentation, etc.

    Example::

    >>> input_array = mx.nd.array([[3., 0.5, -0.5, 2., 7.],
    >>>                            [2., -.4, 7., 3., 0.2]])
    >>> softmax_act = mx.nd.SoftmaxActivation(input_array)
    >>> print softmax_act.asnumpy()
    [[  1.78322066e-02   1.46375655e-03   5.38485940e-04   6.56010211e-03   9.73605454e-01]
     [  6.56221947e-03   5.95310994e-04   9.73919690e-01   1.78379621e-02   1.08472735e-03]]




    Defined in src/operator/nn/softmax_activation.cc:L59

    returns

    org.apache.mxnet.NDArray

  84. abstract def SoftmaxActivation(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies softmax activation to input.

    Applies softmax activation to input. This is intended for internal layers.

    .. note::

    This operator has been deprecated, please use softmax.

    If mode = instance, this operator will compute a softmax for each instance in the batch.
    This is the default mode.

    If mode = channel, this operator will compute a k-class softmax at each position
    of each instance, where k = num_channel. This mode can only be used when the input array
    has at least 3 dimensions.
    This can be used for fully convolutional networks, image segmentation, etc.

    Example::

    >>> input_array = mx.nd.array([[3., 0.5, -0.5, 2., 7.],
    >>>                            [2., -.4, 7., 3., 0.2]])
    >>> softmax_act = mx.nd.SoftmaxActivation(input_array)
    >>> print softmax_act.asnumpy()
    [[  1.78322066e-02   1.46375655e-03   5.38485940e-04   6.56010211e-03   9.73605454e-01]
     [  6.56221947e-03   5.95310994e-04   9.73919690e-01   1.78379621e-02   1.08472735e-03]]




    Defined in src/operator/nn/softmax_activation.cc:L59

    returns

    org.apache.mxnet.NDArray

  85. abstract def SoftmaxOutput(args: Any*): NDArrayFuncReturn

    Computes the gradient of cross entropy loss with respect to softmax output.

    - This operator computes the gradient in two steps.
    The cross entropy loss does not actually need to be computed.

    Computes the gradient of cross entropy loss with respect to softmax output.

    - This operator computes the gradient in two steps.
    The cross entropy loss does not actually need to be computed.

    • Applies softmax function on the input array.
    • Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.

      - The softmax function, cross entropy loss, and gradient are given by:

    • Softmax Function:

      .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}

    • Cross Entropy Function:

      .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)

    • The gradient of cross entropy loss w.r.t softmax output:

      .. math:: \text{gradient} = \text{output} - \text{label}

      - During forward propagation, the softmax function is computed for each instance in the input array.

      For general *N*-D input arrays with shape :math:(d_1, d_2, ..., d_n), the size is
      :math:s = d_1 \cdot d_2 \cdots d_n. We can use the parameters preserve_shape
      and multi_output to specify the way to compute softmax:

    • By default, preserve_shape is false. This operator will reshape the input array
      into a 2-D array with shape :math:(d_1, \frac{s}{d_1}) and then compute the softmax function for
      each row in the reshaped array, and afterwards reshape it back to the original shape
      :math:(d_1, d_2, ..., d_n).
    • If preserve_shape is true, the softmax function will be computed along
      the last axis (axis = -1).
    • If multi_output is true, the softmax function will be computed along
      the second axis (axis = 1).

      - During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed.
      The provided label can be a one-hot label array or a probability label array.

    • If the parameter use_ignore is true, ignore_label can specify input instances
      with a particular label to be ignored during backward propagation. **This has no effect when
      softmax output has same shape as label**.

      Example::

      data = [[1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
      label = [1,0,2,3]
      ignore_label = 1
      SoftmaxOutput(data=data, label = label,\
                    multi_output=true, use_ignore=true,\
                    ignore_label=ignore_label)
      ## forward softmax output
      [[ 0.0320586   0.08714432  0.23688284  0.64391428]
       [ 0.25        0.25        0.25        0.25      ]
       [ 0.25        0.25        0.25        0.25      ]
       [ 0.25        0.25        0.25        0.25      ]]

      ## backward gradient output
      [[ 0.     0.     0.     0.  ]
       [-0.75   0.25   0.25   0.25]
       [ 0.25   0.25  -0.75   0.25]
       [ 0.25   0.25   0.25  -0.75]]

      ## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.

    • The parameter grad_scale can be used to rescale the gradient, which is often used to
      give each loss function different weights.

    • This operator also supports various ways to normalize the gradient via the normalization parameter.
      The normalization is applied if the softmax output has a different shape than the labels.
      The normalization mode can be set to one of the following:

      • 'null': do nothing.
      • 'batch': divide the gradient by the batch size.
      • 'valid': divide the gradient by the number of instances which are not ignored.



        Defined in src/operator/softmax_output.cc:L123
    returns

    org.apache.mxnet.NDArray

  86. abstract def SoftmaxOutput(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the gradient of cross entropy loss with respect to softmax output.

    - This operator computes the gradient in two steps.
    The cross entropy loss does not actually need to be computed.

    Computes the gradient of cross entropy loss with respect to softmax output.

    - This operator computes the gradient in two steps.
    The cross entropy loss does not actually need to be computed.

    • Applies softmax function on the input array.
    • Computes and returns the gradient of cross entropy loss w.r.t. the softmax output.

      - The softmax function, cross entropy loss, and gradient are given by:

    • Softmax Function:

      .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}

    • Cross Entropy Function:

      .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)

    • The gradient of cross entropy loss w.r.t softmax output:

      .. math:: \text{gradient} = \text{output} - \text{label}

      - During forward propagation, the softmax function is computed for each instance in the input array.

      For general *N*-D input arrays with shape :math:(d_1, d_2, ..., d_n), the size is
      :math:s = d_1 \cdot d_2 \cdots d_n. We can use the parameters preserve_shape
      and multi_output to specify the way to compute softmax:

    • By default, preserve_shape is false. This operator will reshape the input array
      into a 2-D array with shape :math:(d_1, \frac{s}{d_1}) and then compute the softmax function for
      each row in the reshaped array, and afterwards reshape it back to the original shape
      :math:(d_1, d_2, ..., d_n).
    • If preserve_shape is true, the softmax function will be computed along
      the last axis (axis = -1).
    • If multi_output is true, the softmax function will be computed along
      the second axis (axis = 1).

      - During backward propagation, the gradient of cross-entropy loss w.r.t softmax output array is computed.
      The provided label can be a one-hot label array or a probability label array.

    • If the parameter use_ignore is true, ignore_label can specify input instances
      with a particular label to be ignored during backward propagation. **This has no effect when
      softmax output has same shape as label**.

      Example::

      data = [[1,2,3,4],[2,2,2,2],[3,3,3,3],[4,4,4,4]]
      label = [1,0,2,3]
      ignore_label = 1
      SoftmaxOutput(data=data, label = label,\
                    multi_output=true, use_ignore=true,\
                    ignore_label=ignore_label)
      ## forward softmax output
      [[ 0.0320586   0.08714432  0.23688284  0.64391428]
       [ 0.25        0.25        0.25        0.25      ]
       [ 0.25        0.25        0.25        0.25      ]
       [ 0.25        0.25        0.25        0.25      ]]

      ## backward gradient output
      [[ 0.     0.     0.     0.  ]
       [-0.75   0.25   0.25   0.25]
       [ 0.25   0.25  -0.75   0.25]
       [ 0.25   0.25   0.25  -0.75]]

      ## notice that the first row is all 0 because label[0] is 1, which is equal to ignore_label.

    • The parameter grad_scale can be used to rescale the gradient, which is often used to
      give each loss function different weights.

    • This operator also supports various ways to normalize the gradient via the normalization parameter.
      The normalization is applied if the softmax output has a different shape than the labels.
      The normalization mode can be set to one of the following:

      • 'null': do nothing.
      • 'batch': divide the gradient by the batch size.
      • 'valid': divide the gradient by the number of instances which are not ignored.



        Defined in src/operator/softmax_output.cc:L123
    returns

    org.apache.mxnet.NDArray
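
    A minimal Scala sketch of the forward call (assumptions: data and label are passed positionally in the operator's input order, and the kwarg values are stringified for the backend; parameter names follow the documentation above)::

    import org.apache.mxnet._

    val data = NDArray.array(
      Array(1f, 2f, 3f, 4f,  2f, 2f, 2f, 2f,  3f, 3f, 3f, 3f,  4f, 4f, 4f, 4f),
      shape = Shape(4, 4))
    val label = NDArray.array(Array(1f, 0f, 2f, 3f), shape = Shape(4))
    val out = NDArray.SoftmaxOutput(Map(
      "use_ignore" -> true, "ignore_label" -> 1))(data, label).head
    println(out.shape)  // expected: (4,4); each row is the softmax of the input row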

  87. abstract def SpatialTransformer(args: Any*): NDArrayFuncReturn

    Applies a spatial transformer to input feature map.

    Applies a spatial transformer to input feature map.

    returns

    org.apache.mxnet.NDArray

  88. abstract def SpatialTransformer(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies a spatial transformer to input feature map.

    Applies a spatial transformer to input feature map.

    returns

    org.apache.mxnet.NDArray

  89. abstract def SwapAxis(args: Any*): NDArrayFuncReturn

    Interchanges two axes of an array.

    Examples::

    x = [[ 1, 2, 3]]
    swapaxes(x, 0, 1) = [[ 1],
                         [ 2],
                         [ 3]]

    x = [[[ 0, 1],
          [ 2, 3]],
         [[ 4, 5],
          [ 6, 7]]]  // (2,2,2) array

    swapaxes(x, 0, 2) = [[[ 0, 4],
                          [ 2, 6]],
                         [[ 1, 5],
                          [ 3, 7]]]


    Defined in src/operator/swapaxis.cc:L70

    Interchanges two axes of an array.

    Examples::

    x = [[ 1, 2, 3]]
    swapaxes(x, 0, 1) = [[ 1],
                         [ 2],
                         [ 3]]

    x = [[[ 0, 1],
          [ 2, 3]],
         [[ 4, 5],
          [ 6, 7]]]  // (2,2,2) array

    swapaxes(x, 0, 2) = [[[ 0, 4],
                          [ 2, 6]],
                         [[ 1, 5],
                          [ 3, 7]]]


    Defined in src/operator/swapaxis.cc:L70

    returns

    org.apache.mxnet.NDArray

  90. abstract def SwapAxis(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Interchanges two axes of an array.

    Examples::

    x = [[ 1, 2, 3]]
    swapaxes(x, 0, 1) = [[ 1],
                         [ 2],
                         [ 3]]

    x = [[[ 0, 1],
          [ 2, 3]],
         [[ 4, 5],
          [ 6, 7]]]  // (2,2,2) array

    swapaxes(x, 0, 2) = [[[ 0, 4],
                          [ 2, 6]],
                         [[ 1, 5],
                          [ 3, 7]]]


    Defined in src/operator/swapaxis.cc:L70

    Interchanges two axes of an array.

    Examples::

    x = [[ 1, 2, 3]]
    swapaxes(x, 0, 1) = [[ 1],
                         [ 2],
                         [ 3]]

    x = [[[ 0, 1],
          [ 2, 3]],
         [[ 4, 5],
          [ 6, 7]]]  // (2,2,2) array

    swapaxes(x, 0, 2) = [[[ 0, 4],
                          [ 2, 6]],
                         [[ 1, 5],
                          [ 3, 7]]]


    Defined in src/operator/swapaxis.cc:L70

    returns

    org.apache.mxnet.NDArray
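
    A minimal Scala sketch (dim1 and dim2 are the operator's documented parameter names)::

    import org.apache.mxnet._

    // Swap the two axes of a (2, 3) array.
    val x = NDArray.array(Array(0f, 1f, 2f, 3f, 4f, 5f), shape = Shape(2, 3))
    val y = NDArray.SwapAxis(Map("dim1" -> 0, "dim2" -> 1))(x).head
    println(y.shape)  // expected: (3,2)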

  91. abstract def UpSampling(args: Any*): NDArrayFuncReturn

    Performs nearest neighbor/bilinear up sampling to inputs.

    Performs nearest neighbor/bilinear up sampling to inputs.

    returns

    org.apache.mxnet.NDArray

  92. abstract def UpSampling(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs nearest neighbor/bilinear up sampling to inputs.

    Performs nearest neighbor/bilinear up sampling to inputs.

    returns

    org.apache.mxnet.NDArray

  93. abstract def abs(args: Any*): NDArrayFuncReturn

    Returns element-wise absolute value of the input.

    Example::

    abs([-2, 0, 3]) = [2, 0, 3]

    The storage type of abs output depends upon the input storage type:

    Returns element-wise absolute value of the input.

    Example::

    abs([-2, 0, 3]) = [2, 0, 3]

    The storage type of abs output depends upon the input storage type:

    • abs(default) = default
    • abs(row_sparse) = row_sparse
    • abs(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L578
    returns

    org.apache.mxnet.NDArray

  94. abstract def abs(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise absolute value of the input.

    Example::

    abs([-2, 0, 3]) = [2, 0, 3]

    The storage type of abs output depends upon the input storage type:

    Returns element-wise absolute value of the input.

    Example::

    abs([-2, 0, 3]) = [2, 0, 3]

    The storage type of abs output depends upon the input storage type:

    • abs(default) = default
    • abs(row_sparse) = row_sparse
    • abs(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L578
    returns

    org.apache.mxnet.NDArray
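
    A minimal Scala sketch mirroring the example above::

    import org.apache.mxnet._

    val x = NDArray.array(Array(-2f, 0f, 3f), shape = Shape(3))
    val y = NDArray.abs(x).head
    println(y.toArray.mkString(", "))  // expected: 2.0, 0.0, 3.0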

  95. abstract def adam_update(args: Any*): NDArrayFuncReturn

    Update function for Adam optimizer.

    Update function for Adam optimizer. Adam is seen as a generalization
    of AdaGrad.

    Adam update consists of the following steps, where g represents gradient and m, v
    are 1st and 2nd order moment estimates (mean and variance).

    .. math::

    g_t = \nabla J(W_{t-1})\\
    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
    W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }

    It updates the weights using::

    m = beta1*m + (1-beta1)*grad
    v = beta2*v + (1-beta2)*(grad**2)
    w += - learning_rate * m / (sqrt(v) + epsilon)

    However, if grad's storage type is row_sparse, lazy_update is True and the storage
    type of weight is the same as those of m and v,
    only the row slices whose indices appear in grad.indices are updated (for w, m and v)::

    for row in grad.indices:
        m[row] = beta1*m[row] + (1-beta1)*grad[row]
        v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
        w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)



    Defined in src/operator/optimizer_op.cc:L495

    returns

    org.apache.mxnet.NDArray

  96. abstract def adam_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for Adam optimizer.

    Update function for Adam optimizer. Adam is seen as a generalization
    of AdaGrad.

    Adam update consists of the following steps, where g represents gradient and m, v
    are 1st and 2nd order moment estimates (mean and variance).

    .. math::

    g_t = \nabla J(W_{t-1})\\
    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t\\
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
    W_t = W_{t-1} - \alpha \frac{ m_t }{ \sqrt{ v_t } + \epsilon }

    It updates the weights using::

    m = beta1*m + (1-beta1)*grad
    v = beta2*v + (1-beta2)*(grad**2)
    w += - learning_rate * m / (sqrt(v) + epsilon)

    However, if grad's storage type is row_sparse, lazy_update is True and the storage
    type of weight is the same as those of m and v,
    only the row slices whose indices appear in grad.indices are updated (for w, m and v)::

    for row in grad.indices:
        m[row] = beta1*m[row] + (1-beta1)*grad[row]
        v[row] = beta2*v[row] + (1-beta2)*(grad[row]**2)
        w[row] += - learning_rate * m[row] / (sqrt(v[row]) + epsilon)



    Defined in src/operator/optimizer_op.cc:L495

    returns

    org.apache.mxnet.NDArray
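
    A minimal Scala sketch of one Adam step (illustrative values only; the positional order weight, grad, mean, var follows the update equations above, and the hyper-parameter names lr, beta1, beta2, epsilon are the operator's documented parameters)::

    import org.apache.mxnet._

    val weight = NDArray.ones(Shape(2))
    val grad   = NDArray.ones(Shape(2)) * 0.1f
    val m      = NDArray.zeros(Shape(2))  // 1st-moment state
    val v      = NDArray.zeros(Shape(2))  // 2nd-moment state
    val updated = NDArray.adam_update(Map(
      "lr" -> 0.001f, "beta1" -> 0.9f, "beta2" -> 0.999f,
      "epsilon" -> 1e-8))(weight, grad, m, v).head
    println(updated.shape)  // expected: (2)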

  97. abstract def add_n(args: Any*): NDArrayFuncReturn

    Adds all input arguments element-wise.

    ..

    Adds all input arguments element-wise.

    .. math::
    add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n

    add_n is potentially more efficient than calling add n times.

    The storage type of add_n output depends on storage types of inputs

    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
    - add_n(any combination of more than 4 inputs with at least one default type) = default
    - otherwise, add_n falls back to default storage for all inputs and generates output in default storage



    Defined in src/operator/tensor/elemwise_sum.cc:L156

    returns

    org.apache.mxnet.NDArray

  98. abstract def add_n(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Adds all input arguments element-wise.

    ..

    Adds all input arguments element-wise.

    .. math::
    add\_n(a_1, a_2, ..., a_n) = a_1 + a_2 + ... + a_n

    add_n is potentially more efficient than calling add n times.

    The storage type of add_n output depends on storage types of inputs

    - add_n(row_sparse, row_sparse, ..) = row_sparse
    - add_n(default, csr, default) = default
    - add_n(any combination of more than 4 inputs with at least one default type) = default
    - otherwise, add_n falls back to default storage for all inputs and generates output in default storage



    Defined in src/operator/tensor/elemwise_sum.cc:L156

    returns

    org.apache.mxnet.NDArray
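
    A minimal Scala sketch: three arrays summed in a single call::

    import org.apache.mxnet._

    val a = NDArray.ones(Shape(2, 2))
    val b = NDArray.ones(Shape(2, 2))
    val c = NDArray.ones(Shape(2, 2))
    val s = NDArray.add_n(a, b, c).head
    println(s.toArray.mkString(", "))  // expected: 3.0, 3.0, 3.0, 3.0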

  99. abstract def arccos(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse cosine of the input array.

    The input should be in range [-1, 1].
    The output is in the closed interval :math:[0, \pi]

    ..

    Returns element-wise inverse cosine of the input array.

    The input should be in range [-1, 1].
    The output is in the closed interval :math:[0, \pi]

    .. math::
    arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]

    The storage type of arccos output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L123

    returns

    org.apache.mxnet.NDArray

  100. abstract def arccos(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse cosine of the input array.

    The input should be in range [-1, 1].
    The output is in the closed interval :math:[0, \pi]

    ..

    Returns element-wise inverse cosine of the input array.

    The input should be in range [-1, 1].
    The output is in the closed interval :math:[0, \pi]

    .. math::
    arccos([-1, -.707, 0, .707, 1]) = [\pi, 3\pi/4, \pi/2, \pi/4, 0]

    The storage type of arccos output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L123

    returns

    org.apache.mxnet.NDArray

  101. abstract def arccosh(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic cosine of the input array, \
    computed element-wise.

    The storage type of arccosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L264

    Returns the element-wise inverse hyperbolic cosine of the input array, \
    computed element-wise.

    The storage type of arccosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L264

    returns

    org.apache.mxnet.NDArray

  102. abstract def arccosh(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic cosine of the input array, \
    computed element-wise.

    The storage type of arccosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L264

    Returns the element-wise inverse hyperbolic cosine of the input array, \
    computed element-wise.

    The storage type of arccosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L264

    returns

    org.apache.mxnet.NDArray

  103. abstract def arcsin(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse sine of the input array.

    The input should be in the range [-1, 1].
    The output is in the closed interval of [:math:-\pi/2, :math:\pi/2].

    ..

    Returns element-wise inverse sine of the input array.

    The input should be in the range [-1, 1].
    The output is in the closed interval of [:math:-\pi/2, :math:\pi/2].

    .. math::
    arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]

    The storage type of arcsin output depends upon the input storage type:

    • arcsin(default) = default
    • arcsin(row_sparse) = row_sparse
    • arcsin(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L104
    returns

    org.apache.mxnet.NDArray

  104. abstract def arcsin(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse sine of the input array.

    The input should be in the range [-1, 1].
    The output is in the closed interval of [:math:-\pi/2, :math:\pi/2].

    ..

    Returns element-wise inverse sine of the input array.

    The input should be in the range [-1, 1].
    The output is in the closed interval of [:math:-\pi/2, :math:\pi/2].

    .. math::
    arcsin([-1, -.707, 0, .707, 1]) = [-\pi/2, -\pi/4, 0, \pi/4, \pi/2]

    The storage type of arcsin output depends upon the input storage type:

    • arcsin(default) = default
    • arcsin(row_sparse) = row_sparse
    • arcsin(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L104
    returns

    org.apache.mxnet.NDArray

  105. abstract def arcsinh(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic sine of the input array, \
    computed element-wise.

    The storage type of arcsinh output depends upon the input storage type:

    Returns the element-wise inverse hyperbolic sine of the input array, \
    computed element-wise.

    The storage type of arcsinh output depends upon the input storage type:

    • arcsinh(default) = default
    • arcsinh(row_sparse) = row_sparse
    • arcsinh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L250
    returns

    org.apache.mxnet.NDArray

  106. abstract def arcsinh(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic sine of the input array, \
    computed element-wise.

    The storage type of arcsinh output depends upon the input storage type:

    Returns the element-wise inverse hyperbolic sine of the input array, \
    computed element-wise.

    The storage type of arcsinh output depends upon the input storage type:

    • arcsinh(default) = default
    • arcsinh(row_sparse) = row_sparse
    • arcsinh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L250
    returns

    org.apache.mxnet.NDArray

  107. abstract def arctan(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse tangent of the input array.

    The output is in the closed interval :math:[-\pi/2, \pi/2]

    ..

    Returns element-wise inverse tangent of the input array.

    The output is in the closed interval :math:[-\pi/2, \pi/2]

    .. math::
    arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]

    The storage type of arctan output depends upon the input storage type:

    • arctan(default) = default
    • arctan(row_sparse) = row_sparse
    • arctan(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L144
    returns

    org.apache.mxnet.NDArray

  108. abstract def arctan(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse tangent of the input array.

    The output is in the closed interval :math:[-\pi/2, \pi/2]

    ..

    Returns element-wise inverse tangent of the input array.

    The output is in the closed interval :math:[-\pi/2, \pi/2]

    .. math::
    arctan([-1, 0, 1]) = [-\pi/4, 0, \pi/4]

    The storage type of arctan output depends upon the input storage type:

    • arctan(default) = default
    • arctan(row_sparse) = row_sparse
    • arctan(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L144
    returns

    org.apache.mxnet.NDArray

  109. abstract def arctanh(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic tangent of the input array, \
    computed element-wise.

    The storage type of arctanh output depends upon the input storage type:

    Returns the element-wise inverse hyperbolic tangent of the input array, \
    computed element-wise.

    The storage type of arctanh output depends upon the input storage type:

    • arctanh(default) = default
    • arctanh(row_sparse) = row_sparse
    • arctanh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L281
    returns

    org.apache.mxnet.NDArray

  110. abstract def arctanh(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the element-wise inverse hyperbolic tangent of the input array, \
    computed element-wise.

    The storage type of arctanh output depends upon the input storage type:

    Returns the element-wise inverse hyperbolic tangent of the input array, \
    computed element-wise.

    The storage type of arctanh output depends upon the input storage type:

    • arctanh(default) = default
    • arctanh(row_sparse) = row_sparse
    • arctanh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L281
    returns

    org.apache.mxnet.NDArray

  111. abstract def argmax(args: Any*): NDArrayFuncReturn

    Returns indices of the maximum values along an axis.

    In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmax along axis 0
    argmax(x, axis=0) = [ 1., 1., 1.]

    // argmax along axis 1
    argmax(x, axis=1) = [ 2., 2.]

    // argmax along axis 1 keeping same dims as an input array
    argmax(x, axis=1, keepdims=True) = [[ 2.],
                                        [ 2.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L52

    Returns indices of the maximum values along an axis.

    In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmax along axis 0
    argmax(x, axis=0) = [ 1., 1., 1.]

    // argmax along axis 1
    argmax(x, axis=1) = [ 2., 2.]

    // argmax along axis 1 keeping same dims as an input array
    argmax(x, axis=1, keepdims=True) = [[ 2.],
                                        [ 2.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L52

    returns

    org.apache.mxnet.NDArray

  112. abstract def argmax(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns indices of the maximum values along an axis.

    In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmax along axis 0
    argmax(x, axis=0) = [ 1., 1., 1.]

    // argmax along axis 1
    argmax(x, axis=1) = [ 2., 2.]

    // argmax along axis 1 keeping same dims as an input array
    argmax(x, axis=1, keepdims=True) = [[ 2.],
                                        [ 2.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L52

    Returns indices of the maximum values along an axis.

    In the case of multiple occurrences of maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmax along axis 0
    argmax(x, axis=0) = [ 1., 1., 1.]

    // argmax along axis 1
    argmax(x, axis=1) = [ 2., 2.]

    // argmax along axis 1 keeping same dims as an input array
    argmax(x, axis=1, keepdims=True) = [[ 2.],
                                        [ 2.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L52

    returns

    org.apache.mxnet.NDArray
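
    A minimal Scala sketch reproducing the example above::

    import org.apache.mxnet._

    val x = NDArray.array(Array(0f, 1f, 2f, 3f, 4f, 5f), shape = Shape(2, 3))
    val rows = NDArray.argmax(Map("axis" -> 0))(x).head
    val cols = NDArray.argmax(Map("axis" -> 1))(x).head
    println(rows.toArray.mkString(", "))  // expected: 1.0, 1.0, 1.0
    println(cols.toArray.mkString(", "))  // expected: 2.0, 2.0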

  113. abstract def argmax_channel(args: Any*): NDArrayFuncReturn

    Returns argmax indices of each channel from the input array.

    The result will be an NDArray of shape (num_channel,).

    In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    argmax_channel(x) = [ 2., 2.]



    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L97

    Returns argmax indices of each channel from the input array.

    The result will be an NDArray of shape (num_channel,).

    In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    argmax_channel(x) = [ 2., 2.]



    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L97

    returns

    org.apache.mxnet.NDArray

  114. abstract def argmax_channel(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns argmax indices of each channel from the input array.

    The result will be an NDArray of shape (num_channel,).

    In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    argmax_channel(x) = [ 2., 2.]



    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L97

    Returns argmax indices of each channel from the input array.

    The result will be an NDArray of shape (num_channel,).

    In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    argmax_channel(x) = [ 2., 2.]



    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L97

    returns

    org.apache.mxnet.NDArray

  115. abstract def argmin(args: Any*): NDArrayFuncReturn

    Returns indices of the minimum values along an axis.

    In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmin along axis 0
    argmin(x, axis=0) = [ 0., 0., 0.]

    // argmin along axis 1
    argmin(x, axis=1) = [ 0., 0.]

    // argmin along axis 1 keeping same dims as an input array
    argmin(x, axis=1, keepdims=True) = [[ 0.],
                                        [ 0.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L77

    Returns indices of the minimum values along an axis.

    In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmin along axis 0
    argmin(x, axis=0) = [ 0., 0., 0.]

    // argmin along axis 1
    argmin(x, axis=1) = [ 0., 0.]

    // argmin along axis 1 keeping same dims as an input array
    argmin(x, axis=1, keepdims=True) = [[ 0.],
                                        [ 0.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L77

    returns

    org.apache.mxnet.NDArray

  116. abstract def argmin(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns indices of the minimum values along an axis.

    In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmin along axis 0
    argmin(x, axis=0) = [ 0., 0., 0.]

    // argmin along axis 1
    argmin(x, axis=1) = [ 0., 0.]

    // argmin along axis 1 keeping same dims as an input array
    argmin(x, axis=1, keepdims=True) = [[ 0.],
                                        [ 0.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L77

    Returns indices of the minimum values along an axis.

    In the case of multiple occurrences of minimum values, the indices corresponding to the first occurrence
    are returned.

    Examples::

    x = [[ 0., 1., 2.],
         [ 3., 4., 5.]]

    // argmin along axis 0
    argmin(x, axis=0) = [ 0., 0., 0.]

    // argmin along axis 1
    argmin(x, axis=1) = [ 0., 0.]

    // argmin along axis 1 keeping same dims as an input array
    argmin(x, axis=1, keepdims=True) = [[ 0.],
                                        [ 0.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L77

    returns

    org.apache.mxnet.NDArray

  117. abstract def argsort(args: Any*): NDArrayFuncReturn

    Returns the indices that would sort an input array along the given axis.

    This function performs sorting along the given axis and returns an array of indices having the same shape
    as the input array that index data in sorted order.

    Examples::

    x = [[ 0.3, 0.2, 0.4],
         [ 0.1, 0.3, 0.2]]

    // sort along axis -1
    argsort(x) = [[ 1., 0., 2.],
                  [ 0., 2., 1.]]

    // sort along axis 0
    argsort(x, axis=0) = [[ 1., 0., 1.],
                          [ 0., 1., 0.]]

    // flatten and then sort
    argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]


    Defined in src/operator/tensor/ordering_op.cc:L176

    Returns the indices that would sort an input array along the given axis.

    This function performs sorting along the given axis and returns an array of indices having the same shape
    as the input array that index data in sorted order.

    Examples::

    x = [[ 0.3, 0.2, 0.4],
         [ 0.1, 0.3, 0.2]]

    // sort along axis -1
    argsort(x) = [[ 1., 0., 2.],
                  [ 0., 2., 1.]]

    // sort along axis 0
    argsort(x, axis=0) = [[ 1., 0., 1.],
                          [ 0., 1., 0.]]

    // flatten and then sort
    argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]


    Defined in src/operator/tensor/ordering_op.cc:L176

    returns

    org.apache.mxnet.NDArray

  118. abstract def argsort(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the indices that would sort an input array along the given axis.

    This function performs sorting along the given axis and returns an array of indices having the same shape
    as the input array that index data in sorted order.

    Examples::

    x = [[ 0.3, 0.2, 0.4],
         [ 0.1, 0.3, 0.2]]

    // sort along axis -1
    argsort(x) = [[ 1., 0., 2.],
                  [ 0., 2., 1.]]

    // sort along axis 0
    argsort(x, axis=0) = [[ 1., 0., 1.],
                          [ 0., 1., 0.]]

    // flatten and then sort
    argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]


    Defined in src/operator/tensor/ordering_op.cc:L176

    Returns the indices that would sort an input array along the given axis.

    This function performs sorting along the given axis and returns an array of indices having the same shape
    as the input array that index data in sorted order.

    Examples::

    x = [[ 0.3, 0.2, 0.4],
         [ 0.1, 0.3, 0.2]]

    // sort along axis -1
    argsort(x) = [[ 1., 0., 2.],
                  [ 0., 2., 1.]]

    // sort along axis 0
    argsort(x, axis=0) = [[ 1., 0., 1.],
                          [ 0., 1., 0.]]

    // flatten and then sort
    argsort(x, axis=None) = [ 3., 1., 5., 0., 4., 2.]


    Defined in src/operator/tensor/ordering_op.cc:L176

    returns

    org.apache.mxnet.NDArray
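
    A minimal Scala sketch of the row-wise case (axis defaults to -1)::

    import org.apache.mxnet._

    val x = NDArray.array(Array(0.3f, 0.2f, 0.4f, 0.1f, 0.3f, 0.2f), shape = Shape(2, 3))
    val idx = NDArray.argsort(x).head
    println(idx.toArray.mkString(", "))  // expected: 1.0, 0.0, 2.0, 0.0, 2.0, 1.0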

  119. abstract def batch_dot(args: Any*): NDArrayFuncReturn

    Batchwise dot product.

    batch_dot is used to compute dot product of x and y when x and
    y are data in batch, namely 3D arrays in shape of (batch_size, :, :).

    For example, given x with shape (batch_size, n, m) and y with shape
    (batch_size, m, k), the result array will have shape (batch_size, n, k),
    which is computed by::

    batch_dot(x,y)[i,:,:] = dot(x[i,:,:], y[i,:,:])



    Defined in src/operator/tensor/dot.cc:L125

    Batchwise dot product.

    batch_dot is used to compute dot product of x and y when x and
    y are data in batch, namely 3D arrays in shape of (batch_size, :, :).

    For example, given x with shape (batch_size, n, m) and y with shape
    (batch_size, m, k), the result array will have shape (batch_size, n, k),
    which is computed by::

    batch_dot(x,y)[i,:,:] = dot(x[i,:,:], y[i,:,:])



    Defined in src/operator/tensor/dot.cc:L125

    returns

    org.apache.mxnet.NDArray

  120. abstract def batch_dot(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Batchwise dot product.

    batch_dot is used to compute the dot product of x and y when x and
    y are batches of data, i.e. 3D arrays of shape (batch_size, :, :).

    For example, given x with shape (batch_size, n, m) and y with shape
    (batch_size, m, k), the result array will have shape (batch_size, n, k),
    which is computed by::

    batch_dot(x,y)[i,:,:] = dot(x[i,:,:], y[i,:,:])


    Defined in src/operator/tensor/dot.cc:L125

    returns

    org.apache.mxnet.NDArray
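
    A hedged Scala sketch of batch_dot on a tiny batch (shapes and values are illustrative only; assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    // batch_size=2; x: (2, 1, 2), y: (2, 2, 1) => result: (2, 1, 1)
    val x = NDArray.array(Array(1f, 2f, 3f, 4f), shape = Shape(2, 1, 2))
    val y = NDArray.array(Array(1f, 1f, 1f, 1f), shape = Shape(2, 2, 1))
    // per-batch dot products: [1,2].[1,1] = 3 and [3,4].[1,1] = 7
    val out = NDArray.batch_dot(x, y).head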

  121. abstract def batch_take(args: Any*): NDArrayFuncReturn

    Takes elements from a data batch.

    .. note::
    batch_take is deprecated. Use pick instead.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // takes elements with specified indices
    batch_take(x, [0,1,0]) = [ 1.  4.  5.]


    Defined in src/operator/tensor/indexing_op.cc:L462

    returns

    org.apache.mxnet.NDArray

  122. abstract def batch_take(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Takes elements from a data batch.

    .. note::
    batch_take is deprecated. Use pick instead.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // takes elements with specified indices
    batch_take(x, [0,1,0]) = [ 1.  4.  5.]


    Defined in src/operator/tensor/indexing_op.cc:L462

    returns

    org.apache.mxnet.NDArray
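
    A hedged Scala sketch mirroring the example above (indices are passed as a second NDArray; names are illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2))
    val indices = NDArray.array(Array(0f, 1f, 0f), shape = Shape(3))
    // picks x(0,0), x(1,1), x(2,0) => [1. 4. 5.]
    val taken = NDArray.batch_take(x, indices).head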

  123. abstract def broadcast_add(args: Any*): NDArrayFuncReturn

    Returns element-wise sum of the input arrays with broadcasting.

    broadcast_plus is an alias to the function broadcast_add.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_add(x, y) = [[ 1.,  1.,  1.],
                           [ 2.,  2.,  2.]]

    broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                            [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_add(csr, dense(1D)) = dense
    broadcast_add(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

    returns

    org.apache.mxnet.NDArray

  124. abstract def broadcast_add(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise sum of the input arrays with broadcasting.

    broadcast_plus is an alias to the function broadcast_add.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_add(x, y) = [[ 1.,  1.,  1.],
                           [ 2.,  2.,  2.]]

    broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                            [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_add(csr, dense(1D)) = dense
    broadcast_add(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

    returns

    org.apache.mxnet.NDArray
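
    A hedged Scala sketch reproducing the broadcast_add example (assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 3))                          // [[1,1,1],[1,1,1]]
    val y = NDArray.array(Array(0f, 1f), shape = Shape(2, 1))  // [[0],[1]]
    val sum = NDArray.broadcast_add(x, y).head                 // [[1,1,1],[2,2,2]]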

  125. abstract def broadcast_axes(args: Any*): NDArrayFuncReturn

    Broadcasts the input array over particular axes.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    Example::

    // given x of shape (1,2,1)
    x = [[[ 1.],
          [ 2.]]]

    // broadcast x on axis 2
    broadcast_axis(x, axis=2, size=3) = [[[ 1.,  1.,  1.],
                                          [ 2.,  2.,  2.]]]

    // broadcast x on axes 0 and 2
    broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]],
                                                 [[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L237

    returns

    org.apache.mxnet.NDArray

  126. abstract def broadcast_axes(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Broadcasts the input array over particular axes.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    Example::

    // given x of shape (1,2,1)
    x = [[[ 1.],
          [ 2.]]]

    // broadcast x on axis 2
    broadcast_axis(x, axis=2, size=3) = [[[ 1.,  1.,  1.],
                                          [ 2.,  2.,  2.]]]

    // broadcast x on axes 0 and 2
    broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]],
                                                 [[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L237

    returns

    org.apache.mxnet.NDArray

  127. abstract def broadcast_axis(args: Any*): NDArrayFuncReturn

    Broadcasts the input array over particular axes.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    Example::

    // given x of shape (1,2,1)
    x = [[[ 1.],
          [ 2.]]]

    // broadcast x on axis 2
    broadcast_axis(x, axis=2, size=3) = [[[ 1.,  1.,  1.],
                                          [ 2.,  2.,  2.]]]

    // broadcast x on axes 0 and 2
    broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]],
                                                 [[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L237

    returns

    org.apache.mxnet.NDArray

  128. abstract def broadcast_axis(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Broadcasts the input array over particular axes.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    Example::

    // given x of shape (1,2,1)
    x = [[[ 1.],
          [ 2.]]]

    // broadcast x on axis 2
    broadcast_axis(x, axis=2, size=3) = [[[ 1.,  1.,  1.],
                                          [ 2.,  2.,  2.]]]

    // broadcast x on axes 0 and 2
    broadcast_axis(x, axis=(0,2), size=(2,3)) = [[[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]],
                                                 [[ 1.,  1.,  1.],
                                                  [ 2.,  2.,  2.]]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L237

    returns

    org.apache.mxnet.NDArray
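
    A hedged Scala sketch of the kwargs form; the string encodings "(0,2)" and "(2,3)" for the axis and size tuples are an assumption about how Map values are serialized to operator parameters::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f), shape = Shape(1, 2, 1))
    // duplicate along axes 0 and 2; the result has shape (2, 2, 3)
    val out = NDArray.broadcast_axis(Map("axis" -> "(0,2)", "size" -> "(2,3)"))(x).head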

  129. abstract def broadcast_div(args: Any*): NDArrayFuncReturn

    Returns element-wise division of the input arrays with broadcasting.

    Example::

    x = [[ 6.,  6.,  6.],
         [ 6.,  6.,  6.]]

    y = [[ 2.],
         [ 3.]]

    broadcast_div(x, y) = [[ 3.,  3.,  3.],
                           [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_div(csr, dense(1D)) = csr


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L187

    returns

    org.apache.mxnet.NDArray

  130. abstract def broadcast_div(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise division of the input arrays with broadcasting.

    Example::

    x = [[ 6.,  6.,  6.],
         [ 6.,  6.,  6.]]

    y = [[ 2.],
         [ 3.]]

    broadcast_div(x, y) = [[ 3.,  3.,  3.],
                           [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_div(csr, dense(1D)) = csr


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L187

    returns

    org.apache.mxnet.NDArray

  131. abstract def broadcast_equal(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **equal to** (==) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_equal(x, y) = [[ 0.,  0.,  0.],
                             [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L46

    returns

    org.apache.mxnet.NDArray

  132. abstract def broadcast_equal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **equal to** (==) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_equal(x, y) = [[ 0.,  0.,  0.],
                             [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L46

    returns

    org.apache.mxnet.NDArray
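
    The broadcast comparison operators in this section all share the same calling pattern; a hedged Scala sketch with broadcast_equal (assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 3))
    val y = NDArray.array(Array(0f, 1f), shape = Shape(2, 1))
    // 1 == 0 is false for row 0, 1 == 1 is true for row 1 => [[0,0,0],[1,1,1]]
    val eq = NDArray.broadcast_equal(x, y).head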

  133. abstract def broadcast_greater(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **greater than** (>) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_greater(x, y) = [[ 1.,  1.,  1.],
                               [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L82

    returns

    org.apache.mxnet.NDArray

  134. abstract def broadcast_greater(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **greater than** (>) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_greater(x, y) = [[ 1.,  1.,  1.],
                               [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L82

    returns

    org.apache.mxnet.NDArray

  135. abstract def broadcast_greater_equal(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **greater than or equal to** (>=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_greater_equal(x, y) = [[ 1.,  1.,  1.],
                                     [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L100

    returns

    org.apache.mxnet.NDArray

  136. abstract def broadcast_greater_equal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **greater than or equal to** (>=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_greater_equal(x, y) = [[ 1.,  1.,  1.],
                                     [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L100

    returns

    org.apache.mxnet.NDArray

  137. abstract def broadcast_hypot(args: Any*): NDArrayFuncReturn

    Returns the hypotenuse of a right angled triangle, given its "legs"
    with broadcasting.

    It is equivalent to doing :math:`\sqrt{x_1^2 + x_2^2}`.

    Example::

    x = [[ 3.,  3.,  3.]]

    y = [[ 4.],
         [ 4.]]

    broadcast_hypot(x, y) = [[ 5.,  5.,  5.],
                             [ 5.,  5.,  5.]]

    z = [[ 0.],
         [ 4.]]

    broadcast_hypot(x, z) = [[ 3.,  3.,  3.],
                             [ 5.,  5.,  5.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L156

    returns

    org.apache.mxnet.NDArray

  138. abstract def broadcast_hypot(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the hypotenuse of a right angled triangle, given its "legs"
    with broadcasting.

    It is equivalent to doing :math:`\sqrt{x_1^2 + x_2^2}`.

    Example::

    x = [[ 3.,  3.,  3.]]

    y = [[ 4.],
         [ 4.]]

    broadcast_hypot(x, y) = [[ 5.,  5.,  5.],
                             [ 5.,  5.,  5.]]

    z = [[ 0.],
         [ 4.]]

    broadcast_hypot(x, z) = [[ 3.,  3.,  3.],
                             [ 5.,  5.,  5.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L156

    returns

    org.apache.mxnet.NDArray
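
    A hedged Scala sketch of the 3-4-5 example above (assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(3f, 3f, 3f), shape = Shape(1, 3))  // one leg per column
    val b = NDArray.array(Array(4f, 4f), shape = Shape(2, 1))      // other leg per row
    // sqrt(3^2 + 4^2) = 5 everywhere => [[5,5,5],[5,5,5]]
    val hyp = NDArray.broadcast_hypot(a, b).head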

  139. abstract def broadcast_lesser(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **lesser than** (<) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_lesser(x, y) = [[ 0.,  0.,  0.],
                              [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L118

    returns

    org.apache.mxnet.NDArray

  140. abstract def broadcast_lesser(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **lesser than** (<) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_lesser(x, y) = [[ 0.,  0.,  0.],
                              [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L118

    returns

    org.apache.mxnet.NDArray

  141. abstract def broadcast_lesser_equal(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **lesser than or equal to** (<=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_lesser_equal(x, y) = [[ 0.,  0.,  0.],
                                    [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L136

    returns

    org.apache.mxnet.NDArray

  142. abstract def broadcast_lesser_equal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **lesser than or equal to** (<=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_lesser_equal(x, y) = [[ 0.,  0.,  0.],
                                    [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L136

    returns

    org.apache.mxnet.NDArray

  143. abstract def broadcast_like(args: Any*): NDArrayFuncReturn

    Broadcasts lhs to have the same shape as rhs.

    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    For example::

    broadcast_like([[1,2,3]], [[5,6,7],[7,8,9]]) = [[ 1.,  2.,  3.],
                                                    [ 1.,  2.,  3.]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L312

    returns

    org.apache.mxnet.NDArray

  144. abstract def broadcast_like(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Broadcasts lhs to have the same shape as rhs.

    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    For example::

    broadcast_like([[1,2,3]], [[5,6,7],[7,8,9]]) = [[ 1.,  2.,  3.],
                                                    [ 1.,  2.,  3.]]


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L312

    returns

    org.apache.mxnet.NDArray
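
    A hedged Scala sketch (names are illustrative; assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    val lhs = NDArray.array(Array(1f, 2f, 3f), shape = Shape(1, 3))
    val rhs = NDArray.zeros(Shape(2, 3))
    // lhs is duplicated along axis 0 to match rhs => [[1,2,3],[1,2,3]]
    val out = NDArray.broadcast_like(lhs, rhs).head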

  145. abstract def broadcast_logical_and(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical and** with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_logical_and(x, y) = [[ 0.,  0.,  0.],
                                   [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L154

    returns

    org.apache.mxnet.NDArray

  146. abstract def broadcast_logical_and(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical and** with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_logical_and(x, y) = [[ 0.,  0.,  0.],
                                   [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L154

    returns

    org.apache.mxnet.NDArray

  147. abstract def broadcast_logical_or(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical or** with broadcasting.

    Example::

    x = [[ 1.,  1.,  0.],
         [ 1.,  1.,  0.]]

    y = [[ 1.],
         [ 0.]]

    broadcast_logical_or(x, y) = [[ 1.,  1.,  1.],
                                  [ 1.,  1.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L172

    returns

    org.apache.mxnet.NDArray

  148. abstract def broadcast_logical_or(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical or** with broadcasting.

    Example::

    x = [[ 1.,  1.,  0.],
         [ 1.,  1.,  0.]]

    y = [[ 1.],
         [ 0.]]

    broadcast_logical_or(x, y) = [[ 1.,  1.,  1.],
                                  [ 1.,  1.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L172

    returns

    org.apache.mxnet.NDArray

  149. abstract def broadcast_logical_xor(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical xor** with broadcasting.

    Example::

    x = [[ 1.,  1.,  0.],
         [ 1.,  1.,  0.]]

    y = [[ 1.],
         [ 0.]]

    broadcast_logical_xor(x, y) = [[ 0.,  0.,  1.],
                                   [ 1.,  1.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L190

    returns

    org.apache.mxnet.NDArray

  150. abstract def broadcast_logical_xor(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **logical xor** with broadcasting.

    Example::

    x = [[ 1.,  1.,  0.],
         [ 1.,  1.,  0.]]

    y = [[ 1.],
         [ 0.]]

    broadcast_logical_xor(x, y) = [[ 0.,  0.,  1.],
                                   [ 1.,  1.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L190

    returns

    org.apache.mxnet.NDArray

  151. abstract def broadcast_maximum(args: Any*): NDArrayFuncReturn

    Returns element-wise maximum of the input arrays with broadcasting.

    This function compares two input arrays and returns a new array having the element-wise maxima.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_maximum(x, y) = [[ 1.,  1.,  1.],
                               [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L80

    returns

    org.apache.mxnet.NDArray

  152. abstract def broadcast_maximum(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise maximum of the input arrays with broadcasting.

    This function compares two input arrays and returns a new array having the element-wise maxima.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_maximum(x, y) = [[ 1.,  1.,  1.],
                               [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L80

    returns

    org.apache.mxnet.NDArray

  153. abstract def broadcast_minimum(args: Any*): NDArrayFuncReturn

    Returns element-wise minimum of the input arrays with broadcasting.

    This function compares two input arrays and returns a new array having the element-wise minima.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_minimum(x, y) = [[ 0.,  0.,  0.],
                               [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L115

    returns

    org.apache.mxnet.NDArray

  154. abstract def broadcast_minimum(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise minimum of the input arrays with broadcasting.

    This function compares two input arrays and returns a new array having the element-wise minima.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_minimum(x, y) = [[ 0.,  0.,  0.],
                               [ 1.,  1.,  1.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L115

    returns

    org.apache.mxnet.NDArray

  155. abstract def broadcast_minus(args: Any*): NDArrayFuncReturn

    Returns element-wise difference of the input arrays with broadcasting.

    broadcast_minus is an alias to the function broadcast_sub.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                           [ 0.,  0.,  0.]]

    broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                             [ 0.,  0.,  0.]]

    Supported sparse operations:

    broadcast_sub/minus(csr, dense(1D)) = dense
    broadcast_sub/minus(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

    returns

    org.apache.mxnet.NDArray

  156. abstract def broadcast_minus(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise difference of the input arrays with broadcasting.

    broadcast_minus is an alias to the function broadcast_sub.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                           [ 0.,  0.,  0.]]

    broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                             [ 0.,  0.,  0.]]

    Supported sparse operations:

    broadcast_sub/minus(csr, dense(1D)) = dense
    broadcast_sub/minus(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

    returns

    org.apache.mxnet.NDArray

  157. abstract def broadcast_mod(args: Any*): NDArrayFuncReturn

    Returns element-wise modulo of the input arrays with broadcasting.

    Example::

    x = [[ 8.,  8.,  8.],
         [ 8.,  8.,  8.]]

    y = [[ 2.],
         [ 3.]]

    broadcast_mod(x, y) = [[ 0.,  0.,  0.],
                           [ 2.,  2.,  2.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L222

    returns

    org.apache.mxnet.NDArray

  158. abstract def broadcast_mod(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise modulo of the input arrays with broadcasting.

    Example::

    x = [[ 8.,  8.,  8.],
         [ 8.,  8.,  8.]]

    y = [[ 2.],
         [ 3.]]

    broadcast_mod(x, y) = [[ 0.,  0.,  0.],
                           [ 2.,  2.,  2.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L222

    returns

    org.apache.mxnet.NDArray

  159. abstract def broadcast_mul(args: Any*): NDArrayFuncReturn

    Returns element-wise product of the input arrays with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_mul(x, y) = [[ 0.,  0.,  0.],
                           [ 1.,  1.,  1.]]

    Supported sparse operations:

    broadcast_mul(csr, dense(1D)) = csr


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L146

    returns

    org.apache.mxnet.NDArray

  160. abstract def broadcast_mul(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise product of the input arrays with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_mul(x, y) = [[ 0.,  0.,  0.],
                           [ 1.,  1.,  1.]]

    Supported sparse operations:

    broadcast_mul(csr, dense(1D)) = csr


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L146

    returns

    org.apache.mxnet.NDArray

  161. abstract def broadcast_not_equal(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **not equal to** (!=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_not_equal(x, y) = [[ 1.,  1.,  1.],
                                 [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L64

    returns

    org.apache.mxnet.NDArray

  162. abstract def broadcast_not_equal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of element-wise **not equal to** (!=) comparison operation with broadcasting.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_not_equal(x, y) = [[ 1.,  1.,  1.],
                                 [ 0.,  0.,  0.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_logic.cc:L64

    returns

    org.apache.mxnet.NDArray

  163. abstract def broadcast_plus(args: Any*): NDArrayFuncReturn

    Returns element-wise sum of the input arrays with broadcasting.

    broadcast_plus is an alias to the function broadcast_add.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_add(x, y) = [[ 1.,  1.,  1.],
                           [ 2.,  2.,  2.]]

    broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                            [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_add(csr, dense(1D)) = dense
    broadcast_add(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

    returns

    org.apache.mxnet.NDArray

  164. abstract def broadcast_plus(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise sum of the input arrays with broadcasting.

    broadcast_plus is an alias to the function broadcast_add.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_add(x, y) = [[ 1.,  1.,  1.],
                           [ 2.,  2.,  2.]]

    broadcast_plus(x, y) = [[ 1.,  1.,  1.],
                            [ 2.,  2.,  2.]]

    Supported sparse operations:

    broadcast_add(csr, dense(1D)) = dense
    broadcast_add(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L58

    returns

    org.apache.mxnet.NDArray

  165. abstract def broadcast_power(args: Any*): NDArrayFuncReturn

    Returns the result of the first array's elements raised to powers from the second array, element-wise with broadcasting.

    Example (values chosen so inputs and output are consistent: 2^0 = 1 and 2^1 = 2)::

    x = [[ 2.,  2.,  2.],
         [ 2.,  2.,  2.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_power(x, y) = [[ 1.,  1.,  1.],
                             [ 2.,  2.,  2.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L45

    returns

    org.apache.mxnet.NDArray

  166. abstract def broadcast_power(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of the first array's elements raised to powers from the second array, element-wise with broadcasting.

    Example (values chosen so inputs and output are consistent: 2^0 = 1 and 2^1 = 2)::

    x = [[ 2.,  2.,  2.],
         [ 2.,  2.,  2.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_power(x, y) = [[ 1.,  1.,  1.],
                             [ 2.,  2.,  2.]]


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_extended.cc:L45

    returns

    org.apache.mxnet.NDArray

  167. abstract def broadcast_sub(args: Any*): NDArrayFuncReturn

    Returns element-wise difference of the input arrays with broadcasting.

    broadcast_minus is an alias to the function broadcast_sub.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                           [ 0.,  0.,  0.]]

    broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                             [ 0.,  0.,  0.]]

    Supported sparse operations:

    broadcast_sub/minus(csr, dense(1D)) = dense
    broadcast_sub/minus(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

    returns

    org.apache.mxnet.NDArray

  168. abstract def broadcast_sub(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise difference of the input arrays with broadcasting.

    broadcast_minus is an alias to the function broadcast_sub.

    Example::

    x = [[ 1.,  1.,  1.],
         [ 1.,  1.,  1.]]

    y = [[ 0.],
         [ 1.]]

    broadcast_sub(x, y) = [[ 1.,  1.,  1.],
                           [ 0.,  0.,  0.]]

    broadcast_minus(x, y) = [[ 1.,  1.,  1.],
                             [ 0.,  0.,  0.]]

    Supported sparse operations:

    broadcast_sub/minus(csr, dense(1D)) = dense
    broadcast_sub/minus(dense(1D), csr) = dense


    Defined in src/operator/tensor/elemwise_binary_broadcast_op_basic.cc:L106

    returns

    org.apache.mxnet.NDArray

  169. abstract def broadcast_to(args: Any*): NDArrayFuncReturn

    Broadcasts the input array to a new shape.

    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    For example::

    broadcast_to([[1,2,3]], shape=(2,3)) = [[ 1.,  2.,  3.],
                                            [ 1.,  2.,  3.]]

    A dimension that you do not want to change can also be specified as 0, which means copy the original size.
    So with shape=(2,0), we obtain the same result as in the above example.


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L261

    returns

    org.apache.mxnet.NDArray

  170. abstract def broadcast_to(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Broadcasts the input array to a new shape.

    Broadcasting is a mechanism that allows NDArrays to perform arithmetic operations
    with arrays of different shapes efficiently without creating multiple copies of arrays.
    Also see, `Broadcasting <https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`_ for more explanation.

    Broadcasting is allowed on axes with size 1, such as from (2,1,3,1) to
    (2,8,3,9). Elements will be duplicated on the broadcasted axes.

    For example::

    broadcast_to([[1,2,3]], shape=(2,3)) = [[ 1.,  2.,  3.],
                                            [ 1.,  2.,  3.]]

    A dimension that you do not want to change can also be specified as 0, which means copy the original size.
    So with shape=(2,0), we obtain the same result as in the above example.


    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L261

    returns

    org.apache.mxnet.NDArray
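
    A hedged Scala sketch; passing the target shape as the string "(2,3)" through the kwargs form is an assumption about parameter serialization::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f), shape = Shape(1, 3))
    // duplicates the single row => [[1,2,3],[1,2,3]]
    val out = NDArray.broadcast_to(Map("shape" -> "(2,3)"))(x).head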

  171. abstract def cast(args: Any*): NDArrayFuncReturn

    Casts all elements of the input to a new type.

    .. note:: The operator name Cast is deprecated. Use cast instead.

    Example::

    cast([0.9, 1.3], dtype='int32') = [0, 1]
    cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
    cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L504

    returns

    org.apache.mxnet.NDArray

  172. abstract def cast(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Casts all elements of the input to a new type.

    .. note:: The operator name Cast is deprecated. Use cast instead.

    Example::

    cast([0.9, 1.3], dtype='int32') = [0, 1]
    cast([1e20, 11.1], dtype='float16') = [inf, 11.09375]
    cast([300, 11.1, 10.9, -1, -3], dtype='uint8') = [44, 11, 10, 255, 253]


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L504

    returns

    org.apache.mxnet.NDArray
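
    A hedged Scala sketch of the kwargs form with dtype (assumes the generated NDArray companion)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(0.9f, 1.3f), shape = Shape(2))
    // truncates toward zero when casting float to int => [0, 1]
    val ints = NDArray.cast(Map("dtype" -> "int32"))(x).head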

  173. abstract def cast_storage(args: Any*): NDArrayFuncReturn

    Casts tensor storage type to the new type.

    When an NDArray with default storage type is cast to csr or row_sparse storage,
    the result is compact, which means:

    - for csr, zero values will not be retained
    - for row_sparse, row slices of all zeros will not be retained

    The storage type of cast_storage output depends on stype parameter:

    - cast_storage(csr, 'default') = default
    - cast_storage(row_sparse, 'default') = default
    - cast_storage(default, 'csr') = csr
    - cast_storage(default, 'row_sparse') = row_sparse
    - cast_storage(csr, 'csr') = csr
    - cast_storage(row_sparse, 'row_sparse') = row_sparse

    Example::

    dense = [[ 0.,  1.,  0.],
             [ 2.,  0.,  3.],
             [ 0.,  0.,  0.],
             [ 0.,  0.,  0.]]

    # cast to row_sparse storage type
    rsp = cast_storage(dense, 'row_sparse')
    rsp.indices = [0, 1]
    rsp.values = [[ 0.,  1.,  0.],
                  [ 2.,  0.,  3.]]

    # cast to csr storage type
    csr = cast_storage(dense, 'csr')
    csr.indices = [1, 0, 2]
    csr.values = [ 1.,  2.,  3.]
    csr.indptr = [0, 1, 3, 3, 3]


    Defined in src/operator/tensor/cast_storage.cc:L71

    returns

    org.apache.mxnet.NDArray

  174. abstract def cast_storage(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Casts tensor storage type to the new type.

    When an NDArray with default storage type is cast to csr or row_sparse storage,
    the result is compact, which means:

    - for csr, zero values will not be retained
    - for row_sparse, row slices of all zeros will not be retained

    The storage type of cast_storage output depends on stype parameter:

    - cast_storage(csr, 'default') = default
    - cast_storage(row_sparse, 'default') = default
    - cast_storage(default, 'csr') = csr
    - cast_storage(default, 'row_sparse') = row_sparse
    - cast_storage(csr, 'csr') = csr
    - cast_storage(row_sparse, 'row_sparse') = row_sparse

    Example::

    dense = [[ 0.,  1.,  0.],
             [ 2.,  0.,  3.],
             [ 0.,  0.,  0.],
             [ 0.,  0.,  0.]]

    # cast to row_sparse storage type
    rsp = cast_storage(dense, 'row_sparse')
    rsp.indices = [0, 1]
    rsp.values = [[ 0.,  1.,  0.],
                  [ 2.,  0.,  3.]]

    # cast to csr storage type
    csr = cast_storage(dense, 'csr')
    csr.indices = [1, 0, 2]
    csr.values = [ 1.,  2.,  3.]
    csr.indptr = [0, 1, 3, 3, 3]


    Defined in src/operator/tensor/cast_storage.cc:L71

    returns

    org.apache.mxnet.NDArray
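
    A hedged Scala sketch; stype selects the target storage, matching the example above::

    import org.apache.mxnet.{NDArray, Shape}

    val dense = NDArray.array(Array(0f, 1f, 0f, 2f, 0f, 3f), shape = Shape(2, 3))
    // only the three non-zero values are retained in the compact csr result
    val csr = NDArray.cast_storage(Map("stype" -> "csr"))(dense).head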

  175. abstract def cbrt(args: Any*): NDArrayFuncReturn

    Returns element-wise cube-root value of the input.

    .. math::
    cbrt(x) = \sqrt[3]{x}

    Example::

    cbrt([1, 8, -125]) = [1, 2, -5]

    The storage type of cbrt output depends upon the input storage type:

    - cbrt(default) = default
    - cbrt(row_sparse) = row_sparse
    - cbrt(csr) = csr


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L799
    returns

    org.apache.mxnet.NDArray

  176. abstract def cbrt(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise cube-root value of the input.

    .. math::
    cbrt(x) = \sqrt[3]{x}

    Example::

    cbrt([1, 8, -125]) = [1, 2, -5]

    The storage type of cbrt output depends upon the input storage type:

    - cbrt(default) = default
    - cbrt(row_sparse) = row_sparse
    - cbrt(csr) = csr


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L799
    returns

    org.apache.mxnet.NDArray

  177. abstract def ceil(args: Any*): NDArrayFuncReturn

    Returns element-wise ceiling of the input.

    The ceil of the scalar x is the smallest integer i, such that i >= x.

    Example::

    ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 2., 2., 3.]

    The storage type of ceil output depends upon the input storage type:

    - ceil(default) = default
    - ceil(row_sparse) = row_sparse
    - ceil(csr) = csr


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L656
    returns

    org.apache.mxnet.NDArray

  178. abstract def ceil(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise ceiling of the input.

    The ceil of the scalar x is the smallest integer i, such that i >= x.

    Example::

    ceil([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-2., -1., 2., 2., 3.]

    The storage type of ceil output depends upon the input storage type:

    - ceil(default) = default
    - ceil(row_sparse) = row_sparse
    - ceil(csr) = csr


    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L656
    returns

    org.apache.mxnet.NDArray

  179. abstract def choose_element_0index(args: Any*): NDArrayFuncReturn

    Chooses one element from each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs. This function assumes rhs uses a 0-based index.

    returns

    org.apache.mxnet.NDArray

  180. abstract def choose_element_0index(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Chooses one element from each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs. This function assumes rhs uses a 0-based index.

    returns

    org.apache.mxnet.NDArray

  181. abstract def clip(args: Any*): NDArrayFuncReturn

    Clips (limits) the values in an array.

    Given an interval, values outside the interval are clipped to the interval edges.
    Clipping x between a_min and a_max would be::

    clip(x, a_min, a_max) = max(min(x, a_max), a_min)

    Example::

    x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

    clip(x,1,8) = [ 1., 1., 2., 3., 4., 5., 6., 7., 8., 8.]

    The storage type of clip output depends on storage types of inputs and the a_min, a_max
    parameter values:

    - clip(default) = default
    - clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
    - clip(csr, a_min <= 0, a_max >= 0) = csr
    - clip(row_sparse, a_min < 0, a_max < 0) = default
    - clip(row_sparse, a_min > 0, a_max > 0) = default
    - clip(csr, a_min < 0, a_max < 0) = csr
    - clip(csr, a_min > 0, a_max > 0) = csr


    Defined in src/operator/tensor/matrix_op.cc:L617
    returns

    org.apache.mxnet.NDArray

  182. abstract def clip(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Clips (limits) the values in an array.

    Given an interval, values outside the interval are clipped to the interval edges.
    Clipping x between a_min and a_max would be::

    clip(x, a_min, a_max) = max(min(x, a_max), a_min)

    Example::

    x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

    clip(x,1,8) = [ 1., 1., 2., 3., 4., 5., 6., 7., 8., 8.]

    The storage type of clip output depends on storage types of inputs and the a_min, a_max
    parameter values:

    • clip(default) = default
    • clip(row_sparse, a_min <= 0, a_max >= 0) = row_sparse
    • clip(csr, a_min <= 0, a_max >= 0) = csr
    • clip(row_sparse, a_min < 0, a_max < 0) = default
    • clip(row_sparse, a_min > 0, a_max > 0) = default
    • clip(csr, a_min < 0, a_max < 0) = csr
    • clip(csr, a_min > 0, a_max > 0) = csr



      Defined in src/operator/tensor/matrix_op.cc:L617
    returns

    org.apache.mxnet.NDArray
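
    A hedged sketch of the kwargs overload; the a_min/a_max parameter names come from the description above, and positional inputs are assumed to map to the operator's data argument:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(0f, 1f, 2f, 3f, 4f, 5f, 6f, 7f, 8f, 9f), Shape(10))
    val y = NDArray.clip(Map("a_min" -> 1f, "a_max" -> 8f))(x).head
    println(y.toArray.mkString(", "))       // 1.0, 1.0, 2.0, ..., 8.0, 8.0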

  183. abstract def concat(args: Any*): NDArrayFuncReturn

    Joins input arrays along a given axis.

    .. note:: Concat is deprecated. Use concat instead.

    The dimensions of the input arrays should be the same except the axis along
    which they will be concatenated.
    The dimension of the output array along the concatenated axis will be equal
    to the sum of the corresponding dimensions of the input arrays.

    The storage type of concat output depends on storage types of inputs

    - concat(csr, csr, ..., csr, dim=0) = csr
    - otherwise, concat generates output with default storage

    Example::

    x = [[1,1],[2,2]]
    y = [[3,3],[4,4],[5,5]]
    z = [[6,6],[7,7],[8,8]]

    concat(x,y,z,dim=0) = [[ 1.,  1.],
                           [ 2.,  2.],
                           [ 3.,  3.],
                           [ 4.,  4.],
                           [ 5.,  5.],
                           [ 6.,  6.],
                           [ 7.,  7.],
                           [ 8.,  8.]]

    Note that you cannot concat x,y,z along dimension 1 since dimension
    0 is not the same for all the input arrays.

    concat(y,z,dim=1) = [[ 3.,  3.,  6.,  6.],
                         [ 4.,  4.,  7.,  7.],
                         [ 5.,  5.,  8.,  8.]]




    Defined in src/operator/nn/concat.cc:L270

    returns

    org.apache.mxnet.NDArray

  184. abstract def concat(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Joins input arrays along a given axis.

    .. note:: Concat is deprecated. Use concat instead.

    The dimensions of the input arrays should be the same except the axis along
    which they will be concatenated.
    The dimension of the output array along the concatenated axis will be equal
    to the sum of the corresponding dimensions of the input arrays.

    The storage type of concat output depends on storage types of inputs

    - concat(csr, csr, ..., csr, dim=0) = csr
    - otherwise, concat generates output with default storage

    Example::

    x = [[1,1],[2,2]]
    y = [[3,3],[4,4],[5,5]]
    z = [[6,6],[7,7],[8,8]]

    concat(x,y,z,dim=0) = [[ 1.,  1.],
                           [ 2.,  2.],
                           [ 3.,  3.],
                           [ 4.,  4.],
                           [ 5.,  5.],
                           [ 6.,  6.],
                           [ 7.,  7.],
                           [ 8.,  8.]]

    Note that you cannot concat x,y,z along dimension 1 since dimension
    0 is not the same for all the input arrays.

    concat(y,z,dim=1) = [[ 3.,  3.,  6.,  6.],
                         [ 4.,  4.,  7.,  7.],
                         [ 5.,  5.,  8.,  8.]]




    Defined in src/operator/nn/concat.cc:L270

    returns

    org.apache.mxnet.NDArray
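
    A sketch of calling concat from Scala; that the backend's num_args and dim parameters go through the kwargs map while inputs stay positional is an assumption of this sketch:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 1f, 2f, 2f), Shape(2, 2))
    val y = NDArray.array(Array(3f, 3f, 4f, 4f, 5f, 5f), Shape(3, 2))
    val out = NDArray.concat(Map("num_args" -> 2, "dim" -> 0))(x, y).head
    println(out.shape)                      // (5,2)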

  185. abstract def cos(args: Any*): NDArrayFuncReturn

    Computes the element-wise cosine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    .. math::
    cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]

    The storage type of cos output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L63

    returns

    org.apache.mxnet.NDArray

  186. abstract def cos(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the element-wise cosine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    .. math::
    cos([0, \pi/4, \pi/2]) = [1, 0.707, 0]

    The storage type of cos output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L63

    returns

    org.apache.mxnet.NDArray

  187. abstract def cosh(args: Any*): NDArrayFuncReturn

    Returns the hyperbolic cosine of the input array, computed element-wise.

    .. math::
    cosh(x) = 0.5\times(exp(x) + exp(-x))

    The storage type of cosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L216

    returns

    org.apache.mxnet.NDArray

  188. abstract def cosh(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the hyperbolic cosine of the input array, computed element-wise.

    .. math::
    cosh(x) = 0.5\times(exp(x) + exp(-x))

    The storage type of cosh output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L216

    returns

    org.apache.mxnet.NDArray

  189. abstract def crop(args: Any*): NDArrayFuncReturn

    Slices a region of the array.

    .. note:: crop is deprecated. Use slice instead.

    This function returns a sliced array between the indices given
    by begin and end with the corresponding step.

    For an input array of shape=(d_0, d_1, ..., d_n-1),
    slice operation with begin=(b_0, b_1...b_m-1),
    end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1),
    where m <= n, results in an array with the shape
    (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

    The resulting array's *k*-th dimension contains elements
    from the *k*-th dimension of the input array starting
    from index b_k (inclusive) with step s_k
    until reaching e_k (exclusive).

    If the *k*-th elements are None in the sequence of begin, end,
    and step, the following rule will be used to set default values.
    If s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k;
    else, set b_k=d_k-1, e_k=-1.

    The storage type of slice output depends on storage types of inputs

    - slice(csr) = csr
    - otherwise, slice generates output with default storage

    .. note:: When input data storage type is csr, it only supports
    step=(), or step=(None,), or step=(1,) to generate a csr output.
    For other step parameter values, it falls back to slicing
    a dense tensor.

    Example::

    x = [[  1.,   2.,   3.,   4.],
         [  5.,   6.,   7.,   8.],
         [  9.,  10.,  11.,  12.]]

    slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                        [ 6.,  7.,  8.]]

    slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                              [5., 7.],
                                                              [1., 3.]]



    Defined in src/operator/tensor/matrix_op.cc:L412

    returns

    org.apache.mxnet.NDArray

  190. abstract def crop(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Slices a region of the array.

    .. note:: crop is deprecated. Use slice instead.

    This function returns a sliced array between the indices given
    by begin and end with the corresponding step.

    For an input array of shape=(d_0, d_1, ..., d_n-1),
    slice operation with begin=(b_0, b_1...b_m-1),
    end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1),
    where m <= n, results in an array with the shape
    (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

    The resulting array's *k*-th dimension contains elements
    from the *k*-th dimension of the input array starting
    from index b_k (inclusive) with step s_k
    until reaching e_k (exclusive).

    If the *k*-th elements are None in the sequence of begin, end,
    and step, the following rule will be used to set default values.
    If s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k;
    else, set b_k=d_k-1, e_k=-1.

    The storage type of slice output depends on storage types of inputs

    - slice(csr) = csr
    - otherwise, slice generates output with default storage

    .. note:: When input data storage type is csr, it only supports
    step=(), or step=(None,), or step=(1,) to generate a csr output.
    For other step parameter values, it falls back to slicing
    a dense tensor.

    Example::

    x = [[  1.,   2.,   3.,   4.],
         [  5.,   6.,   7.,   8.],
         [  9.,  10.,  11.,  12.]]

    slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                        [ 6.,  7.,  8.]]

    slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[9., 11.],
                                                              [5., 7.],
                                                              [1., 3.]]



    Defined in src/operator/tensor/matrix_op.cc:L412

    returns

    org.apache.mxnet.NDArray
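
    An illustrative sketch using the non-deprecated slice operator; passing begin/end as tuple-formatted strings is an assumption about how the backend parses shape-like parameters:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 12).map(_.toFloat).toArray, Shape(3, 4))
    val s = NDArray.slice(Map("begin" -> "(0,1)", "end" -> "(2,4)"))(x).head
    println(s.shape)                        // (2,3)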

  191. abstract def degrees(args: Any*): NDArrayFuncReturn

    Converts each element of the input array from radians to degrees.

    .. math::
    degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]

    The storage type of degrees output depends upon the input storage type:

    • degrees(default) = default
    • degrees(row_sparse) = row_sparse
    • degrees(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L163
    returns

    org.apache.mxnet.NDArray

  192. abstract def degrees(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Converts each element of the input array from radians to degrees.

    .. math::
    degrees([0, \pi/2, \pi, 3\pi/2, 2\pi]) = [0, 90, 180, 270, 360]

    The storage type of degrees output depends upon the input storage type:

    • degrees(default) = default
    • degrees(row_sparse) = row_sparse
    • degrees(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L163
    returns

    org.apache.mxnet.NDArray

  193. abstract def diag(args: Any*): NDArrayFuncReturn

    Extracts a diagonal or constructs a diagonal array.

    diag's behavior depends on the input array dimensions:

    - 1-D arrays: constructs a 2-D array with the input as its diagonal, all other elements are zero
    - 2-D arrays: returns elements in the diagonal as a new 1-D array
    - N-D arrays: not supported yet

    Examples::

    x = [[1, 2, 3],
         [4, 5, 6]]

    diag(x) = [1, 5]

    diag(x, k=1) = [2, 6]

    diag(x, k=-1) = [4]

    x = [1, 2, 3]

    diag(x) = [[1, 0, 0],
               [0, 2, 0],
               [0, 0, 3]]

    diag(x, k=1) = [[0, 1, 0],
                    [0, 0, 2],
                    [0, 0, 0]]

    diag(x, k=-1) = [[0, 0, 0],
                     [1, 0, 0],
                     [0, 2, 0]]




    Defined in src/operator/tensor/diag_op.cc:L68

    returns

    org.apache.mxnet.NDArray

  194. abstract def diag(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Extracts a diagonal or constructs a diagonal array.

    diag's behavior depends on the input array dimensions:

    - 1-D arrays: constructs a 2-D array with the input as its diagonal, all other elements are zero
    - 2-D arrays: returns elements in the diagonal as a new 1-D array
    - N-D arrays: not supported yet

    Examples::

    x = [[1, 2, 3],
         [4, 5, 6]]

    diag(x) = [1, 5]

    diag(x, k=1) = [2, 6]

    diag(x, k=-1) = [4]

    x = [1, 2, 3]

    diag(x) = [[1, 0, 0],
               [0, 2, 0],
               [0, 0, 3]]

    diag(x, k=1) = [[0, 1, 0],
                    [0, 0, 2],
                    [0, 0, 0]]

    diag(x, k=-1) = [[0, 0, 0],
                     [1, 0, 0],
                     [0, 2, 0]]




    Defined in src/operator/tensor/diag_op.cc:L68

    returns

    org.apache.mxnet.NDArray
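
    A minimal sketch of both diag modes (1-D input builds a matrix, 2-D input extracts a diagonal); the k offset is passed through the kwargs overload:

    import org.apache.mxnet.{NDArray, Shape}

    val v = NDArray.array(Array(1f, 2f, 3f), Shape(3))
    val m = NDArray.diag(v).head                    // 3x3 matrix with v on the diagonal
    val d = NDArray.diag(Map("k" -> 1))(m).head     // superdiagonal of m
    println(d.toArray.mkString(", "))               // 0.0, 0.0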

  195. abstract def dot(args: Any*): NDArrayFuncReturn

    Dot product of two arrays.

    dot's behavior depends on the input array dimensions:

    - 1-D arrays: inner product of vectors
    - 2-D arrays: matrix multiplication
    - N-D arrays: a sum product over the last axis of the first input and the first
    axis of the second input

    For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the
    result array will have shape (n,m,r,s). It is computed by::

    dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])

    Example::

    x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
    y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))
    dot(x,y)[0,0,1,1] = 0
    sum(x[0,0,:]*y[:,1,1]) = 0

    The storage type of dot output depends on storage types of inputs, transpose option and
    forward_stype option for output storage type. Implemented sparse operations include:

    - dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
    - dot(csr, default, transpose_a=True) = default
    - dot(csr, default, transpose_a=True) = row_sparse
    - dot(csr, default) = default
    - dot(csr, row_sparse) = default
    - dot(default, csr) = csr (CPU only)
    - dot(default, csr, forward_stype='default') = default
    - dot(default, csr, transpose_b=True, forward_stype='default') = default

    If the combination of input storage types and forward_stype does not match any of the
    above patterns, dot will fall back and generate output with default storage.

    .. Note::

    If the storage type of the lhs is "csr", the storage type of gradient w.r.t rhs will be
    "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
    and Adam. Note that by default lazy updates are turned on, which may perform differently
    from standard updates. For more details, please check the Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html



    Defined in src/operator/tensor/dot.cc:L77

    returns

    org.apache.mxnet.NDArray

  196. abstract def dot(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Dot product of two arrays.

    dot's behavior depends on the input array dimensions:

    - 1-D arrays: inner product of vectors
    - 2-D arrays: matrix multiplication
    - N-D arrays: a sum product over the last axis of the first input and the first
    axis of the second input

    For example, given 3-D x with shape (n,m,k) and y with shape (k,r,s), the
    result array will have shape (n,m,r,s). It is computed by::

    dot(x,y)[i,j,a,b] = sum(x[i,j,:]*y[:,a,b])

    Example::

    x = reshape([0,1,2,3,4,5,6,7], shape=(2,2,2))
    y = reshape([7,6,5,4,3,2,1,0], shape=(2,2,2))
    dot(x,y)[0,0,1,1] = 0
    sum(x[0,0,:]*y[:,1,1]) = 0

    The storage type of dot output depends on storage types of inputs, transpose option and
    forward_stype option for output storage type. Implemented sparse operations include:

    - dot(default, default, transpose_a=True/False, transpose_b=True/False) = default
    - dot(csr, default, transpose_a=True) = default
    - dot(csr, default, transpose_a=True) = row_sparse
    - dot(csr, default) = default
    - dot(csr, row_sparse) = default
    - dot(default, csr) = csr (CPU only)
    - dot(default, csr, forward_stype='default') = default
    - dot(default, csr, transpose_b=True, forward_stype='default') = default

    If the combination of input storage types and forward_stype does not match any of the
    above patterns, dot will fall back and generate output with default storage.

    .. Note::

    If the storage type of the lhs is "csr", the storage type of gradient w.r.t rhs will be
    "row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
    and Adam. Note that by default lazy updates are turned on, which may perform differently
    from standard updates. For more details, please check the Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html



    Defined in src/operator/tensor/dot.cc:L77

    returns

    org.apache.mxnet.NDArray
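
    For illustration, a 2-D matrix product via the varargs overload (a minimal sketch, not generated documentation):

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 2f, 3f, 4f), Shape(2, 2))
    val b = NDArray.array(Array(5f, 6f, 7f, 8f), Shape(2, 2))
    val c = NDArray.dot(a, b).head          // [[1,2],[3,4]] x [[5,6],[7,8]]
    println(c.toArray.mkString(", "))       // 19.0, 22.0, 43.0, 50.0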

  197. abstract def elemwise_add(args: Any*): NDArrayFuncReturn

    Adds arguments element-wise.

    The storage type of elemwise_add output depends on storage types of inputs

    • elemwise_add(row_sparse, row_sparse) = row_sparse
    • elemwise_add(csr, csr) = csr
    • elemwise_add(default, csr) = default
    • elemwise_add(csr, default) = default
    • elemwise_add(default, rsp) = default
    • elemwise_add(rsp, default) = default
    • otherwise, elemwise_add generates output with default storage
    returns

    org.apache.mxnet.NDArray

  198. abstract def elemwise_add(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Adds arguments element-wise.

    The storage type of elemwise_add output depends on storage types of inputs

    • elemwise_add(row_sparse, row_sparse) = row_sparse
    • elemwise_add(csr, csr) = csr
    • elemwise_add(default, csr) = default
    • elemwise_add(csr, default) = default
    • elemwise_add(default, rsp) = default
    • elemwise_add(rsp, default) = default
    • otherwise, elemwise_add generates output with default storage
    returns

    org.apache.mxnet.NDArray

  199. abstract def elemwise_div(args: Any*): NDArrayFuncReturn

    Divides arguments element-wise.

    The storage type of elemwise_div output is always dense

    returns

    org.apache.mxnet.NDArray

  200. abstract def elemwise_div(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Divides arguments element-wise.

    The storage type of elemwise_div output is always dense

    returns

    org.apache.mxnet.NDArray

  201. abstract def elemwise_mul(args: Any*): NDArrayFuncReturn

    Multiplies arguments element-wise.

    The storage type of elemwise_mul output depends on storage types of inputs

    • elemwise_mul(default, default) = default
    • elemwise_mul(row_sparse, row_sparse) = row_sparse
    • elemwise_mul(default, row_sparse) = row_sparse
    • elemwise_mul(row_sparse, default) = row_sparse
    • elemwise_mul(csr, csr) = csr
    • otherwise, elemwise_mul generates output with default storage
    returns

    org.apache.mxnet.NDArray

  202. abstract def elemwise_mul(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Multiplies arguments element-wise.

    The storage type of elemwise_mul output depends on storage types of inputs

    • elemwise_mul(default, default) = default
    • elemwise_mul(row_sparse, row_sparse) = row_sparse
    • elemwise_mul(default, row_sparse) = row_sparse
    • elemwise_mul(row_sparse, default) = row_sparse
    • elemwise_mul(csr, csr) = csr
    • otherwise, elemwise_mul generates output with default storage
    returns

    org.apache.mxnet.NDArray

  203. abstract def elemwise_sub(args: Any*): NDArrayFuncReturn

    Subtracts arguments element-wise.

    The storage type of elemwise_sub output depends on storage types of inputs

    • elemwise_sub(row_sparse, row_sparse) = row_sparse
    • elemwise_sub(csr, csr) = csr
    • elemwise_sub(default, csr) = default
    • elemwise_sub(csr, default) = default
    • elemwise_sub(default, rsp) = default
    • elemwise_sub(rsp, default) = default
    • otherwise, elemwise_sub generates output with default storage
    returns

    org.apache.mxnet.NDArray

  204. abstract def elemwise_sub(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Subtracts arguments element-wise.

    The storage type of elemwise_sub output depends on storage types of inputs

    • elemwise_sub(row_sparse, row_sparse) = row_sparse
    • elemwise_sub(csr, csr) = csr
    • elemwise_sub(default, csr) = default
    • elemwise_sub(csr, default) = default
    • elemwise_sub(default, rsp) = default
    • elemwise_sub(rsp, default) = default
    • otherwise, elemwise_sub generates output with default storage
    returns

    org.apache.mxnet.NDArray
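
    A short sketch exercising two of the elemwise operators above on dense inputs, assuming the varargs overloads take the two operands positionally:

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 2f, 3f), Shape(3))
    val b = NDArray.array(Array(4f, 5f, 6f), Shape(3))
    println(NDArray.elemwise_add(a, b).head.toArray.mkString(", "))   // 5.0, 7.0, 9.0
    println(NDArray.elemwise_mul(a, b).head.toArray.mkString(", "))   // 4.0, 10.0, 18.0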

  205. abstract def exp(args: Any*): NDArrayFuncReturn

    Returns element-wise exponential value of the input.

    .. math::
    exp(x) = e^x \approx 2.718^x

    Example::

    exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]

    The storage type of exp output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L839

    returns

    org.apache.mxnet.NDArray

  206. abstract def exp(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise exponential value of the input.

    .. math::
    exp(x) = e^x \approx 2.718^x

    Example::

    exp([0, 1, 2]) = [1., 2.71828175, 7.38905621]

    The storage type of exp output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L839

    returns

    org.apache.mxnet.NDArray

  207. abstract def expand_dims(args: Any*): NDArrayFuncReturn

    Inserts a new axis of size 1 into the array shape

    For example, given x with shape (2,3,4), then expand_dims(x, axis=1)
    will return a new array with shape (2,1,3,4).



    Defined in src/operator/tensor/matrix_op.cc:L346

    returns

    org.apache.mxnet.NDArray

  208. abstract def expand_dims(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Inserts a new axis of size 1 into the array shape

    For example, given x with shape (2,3,4), then expand_dims(x, axis=1)
    will return a new array with shape (2,1,3,4).



    Defined in src/operator/tensor/matrix_op.cc:L346

    returns

    org.apache.mxnet.NDArray
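
    A sketch matching the example above; the axis parameter name is taken from the operator description and passed through the kwargs overload:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 3, 4))
    val y = NDArray.expand_dims(Map("axis" -> 1))(x).head
    println(y.shape)                        // (2,1,3,4)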

  209. abstract def expm1(args: Any*): NDArrayFuncReturn

    Returns exp(x) - 1 computed element-wise on the input.

    This function provides greater precision than exp(x) - 1 for small values of x.

    The storage type of expm1 output depends upon the input storage type:

    • expm1(default) = default
    • expm1(row_sparse) = row_sparse
    • expm1(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L918
    returns

    org.apache.mxnet.NDArray

  210. abstract def expm1(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns exp(x) - 1 computed element-wise on the input.

    This function provides greater precision than exp(x) - 1 for small values of x.

    The storage type of expm1 output depends upon the input storage type:

    • expm1(default) = default
    • expm1(row_sparse) = row_sparse
    • expm1(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L918
    returns

    org.apache.mxnet.NDArray
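
    A sketch of why expm1 exists: for tiny x, exp(x) - 1 cancels to 0.0 in float32 while expm1 keeps the leading digits (illustrative, not from the source docs):

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1e-8f), Shape(1))
    val naive  = NDArray.exp(x).head.toArray(0) - 1f   // 0.0 after cancellation
    val stable = NDArray.expm1(x).head.toArray(0)      // ~1.0e-8
    println(s"$naive vs $stable")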

  211. abstract def fill_element_0index(args: Any*): NDArrayFuncReturn

    Fill one element of each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs and the values indicated by mhs. This function assumes rhs uses a 0-based index.

    returns

    org.apache.mxnet.NDArray

  212. abstract def fill_element_0index(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Fill one element of each line (row for Python, column for R/Julia) in lhs according to the index indicated by rhs and the values indicated by mhs. This function assumes rhs uses a 0-based index.

    returns

    org.apache.mxnet.NDArray

  213. abstract def fix(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer towards zero of the input.

    Example::

    fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1., 1., 2.]

    The storage type of fix output depends upon the input storage type:

    • fix(default) = default
    • fix(row_sparse) = row_sparse
    • fix(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L713
    returns

    org.apache.mxnet.NDArray

  214. abstract def fix(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer towards zero of the input.

    Example::

    fix([-2.1, -1.9, 1.9, 2.1]) = [-2., -1., 1., 2.]

    The storage type of fix output depends upon the input storage type:

    • fix(default) = default
    • fix(row_sparse) = row_sparse
    • fix(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L713
    returns

    org.apache.mxnet.NDArray

  215. abstract def flatten(args: Any*): NDArrayFuncReturn

    Flattens the input array into a 2-D array by collapsing the higher dimensions.

    .. note:: Flatten is deprecated. Use flatten instead.

    For an input array with shape (d1, d2, ..., dk), flatten operation reshapes
    the input array into an output array of shape (d1, d2*...*dk).

    Note that the behavior of this function is different from numpy.ndarray.flatten,
    which behaves similar to mxnet.ndarray.reshape((-1,)).

    Example::

    x = [[[1,2,3],
          [4,5,6],
          [7,8,9]],
         [[1,2,3],
          [4,5,6],
          [7,8,9]]]

    flatten(x) = [[ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.],
                  [ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.]]




    Defined in src/operator/tensor/matrix_op.cc:L258

    returns

    org.apache.mxnet.NDArray

  216. abstract def flatten(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Flattens the input array into a 2-D array by collapsing the higher dimensions.

    .. note:: Flatten is deprecated. Use flatten instead.

    For an input array with shape (d1, d2, ..., dk), flatten operation reshapes
    the input array into an output array of shape (d1, d2*...*dk).

    Note that the behavior of this function is different from numpy.ndarray.flatten,
    which behaves similar to mxnet.ndarray.reshape((-1,)).

    Example::

    x = [[[1,2,3],
          [4,5,6],
          [7,8,9]],
         [[1,2,3],
          [4,5,6],
          [7,8,9]]]

    flatten(x) = [[ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.],
                  [ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.]]




    Defined in src/operator/tensor/matrix_op.cc:L258

    returns

    org.apache.mxnet.NDArray
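
    A minimal sketch: flatten collapses everything after the first axis, so a (2,3,4) input becomes (2,12):

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 3, 4))
    val y = NDArray.flatten(x).head
    println(y.shape)                        // (2,12)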

  217. abstract def flip(args: Any*): NDArrayFuncReturn

    Reverses the order of elements along given axis while preserving array shape.

    Note: reverse and flip are equivalent. We use reverse in the following examples.

    Examples::

    x = [[ 0.,  1.,  2.,  3.,  4.],
         [ 5.,  6.,  7.,  8.,  9.]]

    reverse(x, axis=0) = [[ 5.,  6.,  7.,  8.,  9.],
                          [ 0.,  1.,  2.,  3.,  4.]]

    reverse(x, axis=1) = [[ 4.,  3.,  2.,  1.,  0.],
                          [ 9.,  8.,  7.,  6.,  5.]]



    Defined in src/operator/tensor/matrix_op.cc:L792

    returns

    org.apache.mxnet.NDArray

  218. abstract def flip(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reverses the order of elements along given axis while preserving array shape.

    Note: reverse and flip are equivalent. We use reverse in the following examples.

    Examples::

    x = [[ 0.,  1.,  2.,  3.,  4.],
         [ 5.,  6.,  7.,  8.,  9.]]

    reverse(x, axis=0) = [[ 5.,  6.,  7.,  8.,  9.],
                          [ 0.,  1.,  2.,  3.,  4.]]

    reverse(x, axis=1) = [[ 4.,  3.,  2.,  1.,  0.],
                          [ 9.,  8.,  7.,  6.,  5.]]



    Defined in src/operator/tensor/matrix_op.cc:L792

    returns

    org.apache.mxnet.NDArray
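
    A sketch of flip along axis 1 (each row reversed in place); the axis parameter name follows the examples above:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(0f, 1f, 2f, 3f, 4f, 5f), Shape(2, 3))
    val y = NDArray.flip(Map("axis" -> 1))(x).head
    println(y.toArray.mkString(", "))       // 2.0, 1.0, 0.0, 5.0, 4.0, 3.0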

  219. abstract def floor(args: Any*): NDArrayFuncReturn

    Returns element-wise floor of the input.

    The floor of the scalar x is the largest integer i, such that i <= x.

    Example::

    floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2., 1., 1., 2.]

    The storage type of floor output depends upon the input storage type:

    • floor(default) = default
    • floor(row_sparse) = row_sparse
    • floor(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L675
    returns

    org.apache.mxnet.NDArray

  220. abstract def floor(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise floor of the input.

    The floor of the scalar x is the largest integer i, such that i <= x.

    Example::

    floor([-2.1, -1.9, 1.5, 1.9, 2.1]) = [-3., -2., 1., 1., 2.]

    The storage type of floor output depends upon the input storage type:

    • floor(default) = default
    • floor(row_sparse) = row_sparse
    • floor(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L675
    returns

    org.apache.mxnet.NDArray

  221. abstract def ftml_update(args: Any*): NDArrayFuncReturn

    The FTML optimizer described in
    *FTML - Follow the Moving Leader in Deep Learning*,
    available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
    d_t = \frac{ 1 - \beta_1^t }{ \eta_t } (\sqrt{ \frac{ v_t }{ 1 - \beta_2^t } } + \epsilon)\\
    \sigma_t = d_t - \beta_1 d_{t-1}\\
    z_t = \beta_1 z_{t-1} + (1 - \beta_1^t) g_t - \sigma_t W_{t-1}\\
    W_t = - \frac{ z_t }{ d_t }



    Defined in src/operator/optimizer_op.cc:L447

    returns

    org.apache.mxnet.NDArray

  222. abstract def ftml_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    The FTML optimizer described in
    *FTML - Follow the Moving Leader in Deep Learning*,
    available at http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
    d_t = \frac{ 1 - \beta_1^t }{ \eta_t } (\sqrt{ \frac{ v_t }{ 1 - \beta_2^t } } + \epsilon)\\
    \sigma_t = d_t - \beta_1 d_{t-1}\\
    z_t = \beta_1 z_{t-1} + (1 - \beta_1^t) g_t - \sigma_t W_{t-1}\\
    W_t = - \frac{ z_t }{ d_t }



    Defined in src/operator/optimizer_op.cc:L447

    returns

    org.apache.mxnet.NDArray

  223. abstract def ftrl_update(args: Any*): NDArrayFuncReturn

    Update function for Ftrl optimizer.
    Referenced from *Ad Click Prediction: a View from the Trenches*, available at
    http://dl.acm.org/citation.cfm?id=2488200.

    It updates the weights using::

    rescaled_grad = clip(grad * rescale_grad, clip_gradient)
    z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
    n += rescaled_grad**2
    w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)

    If w, z and n are all of row_sparse storage type,
    only the row slices whose indices appear in grad.indices are updated (for w, z and n)::

    for row in grad.indices:
        rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
        z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
        n[row] += rescaled_grad[row]**2
        w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)



    Defined in src/operator/optimizer_op.cc:L632

    returns

    org.apache.mxnet.NDArray

  224. abstract def ftrl_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for Ftrl optimizer.
    Referenced from *Ad Click Prediction: a View from the Trenches*, available at
    http://dl.acm.org/citation.cfm?id=2488200.

    It updates the weights using::

    rescaled_grad = clip(grad * rescale_grad, clip_gradient)
    z += rescaled_grad - (sqrt(n + rescaled_grad**2) - sqrt(n)) * weight / learning_rate
    n += rescaled_grad**2
    w = (sign(z) * lamda1 - z) / ((beta + sqrt(n)) / learning_rate + wd) * (abs(z) > lamda1)

    If w, z and n are all of row_sparse storage type,
    only the row slices whose indices appear in grad.indices are updated (for w, z and n)::

    for row in grad.indices:
        rescaled_grad[row] = clip(grad[row] * rescale_grad, clip_gradient)
        z[row] += rescaled_grad[row] - (sqrt(n[row] + rescaled_grad[row]**2) - sqrt(n[row])) * weight[row] / learning_rate
        n[row] += rescaled_grad[row]**2
        w[row] = (sign(z[row]) * lamda1 - z[row]) / ((beta + sqrt(n[row])) / learning_rate + wd) * (abs(z[row]) > lamda1)



    Defined in src/operator/optimizer_op.cc:L632

    returns

    org.apache.mxnet.NDArray

  225. abstract def gamma(args: Any*): NDArrayFuncReturn

    Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.

    The storage type of gamma output is always dense

    returns

    org.apache.mxnet.NDArray

  226. abstract def gamma(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the gamma function (extension of the factorial function to the reals), computed element-wise on the input array.

    The storage type of gamma output is always dense

    returns

    org.apache.mxnet.NDArray

  227. abstract def gammaln(args: Any*): NDArrayFuncReturn

    Returns element-wise log of the absolute value of the gamma function of the input.

    The storage type of gammaln output is always dense

    returns

    org.apache.mxnet.NDArray

  228. abstract def gammaln(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise log of the absolute value of the gamma function of the input.

    The storage type of gammaln output is always dense

    returns

    org.apache.mxnet.NDArray

  229. abstract def gather_nd(args: Any*): NDArrayFuncReturn

    Gather elements or slices from data and store to a tensor whose
    shape is defined by indices.

    Given data with shape (X_0, X_1, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}),
    where M <= N. If M == N, output shape will simply be (Y_0, ..., Y_{K-1}).

    The elements in the output are defined as follows::

    output[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}] = data[indices[0, y_0, ..., y_{K-1}],
                                                        ...,
                                                        indices[M-1, y_0, ..., y_{K-1}],
                                                        x_M, ..., x_{N-1}]

    Examples::

    data = [[0, 1], [2, 3]]
    indices = [[1, 1, 0], [0, 1, 0]]
    gather_nd(data, indices) = [2, 3, 0]

    returns

    org.apache.mxnet.NDArray

  230. abstract def gather_nd(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Gather elements or slices from data and store to a tensor whose
    shape is defined by indices.

    Given data with shape (X_0, X_1, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}),
    where M <= N. If M == N, output shape will simply be (Y_0, ..., Y_{K-1}).

    The elements in the output are defined as follows::

    output[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}] = data[indices[0, y_0, ..., y_{K-1}],
                                                        ...,
                                                        indices[M-1, y_0, ..., y_{K-1}],
                                                        x_M, ..., x_{N-1}]

    Examples::

    data = [[0, 1], [2, 3]]
    indices = [[1, 1, 0], [0, 1, 0]]
    gather_nd(data, indices) = [2, 3, 0]

    returns

    org.apache.mxnet.NDArray
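
    A sketch reproducing the example above from Scala; supplying the indices as a float-typed NDArray is an assumption of this sketch:

    import org.apache.mxnet.{NDArray, Shape}

    val data    = NDArray.array(Array(0f, 1f, 2f, 3f), Shape(2, 2))
    val indices = NDArray.array(Array(1f, 1f, 0f, 0f, 1f, 0f), Shape(2, 3))
    val out = NDArray.gather_nd(data, indices).head
    println(out.toArray.mkString(", "))     // 2.0, 3.0, 0.0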

  231. abstract def hard_sigmoid(args: Any*): NDArrayFuncReturn

    Computes hard sigmoid of x element-wise.

    .. math::
    y = max(0, min(1, alpha * x + beta))



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L115

    returns

    org.apache.mxnet.NDArray

  232. abstract def hard_sigmoid(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes hard sigmoid of x element-wise.

    .. math::
    y = max(0, min(1, alpha * x + beta))



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L115

    returns

    org.apache.mxnet.NDArray

  233. abstract def identity(args: Any*): NDArrayFuncReturn

    Returns a copy of the input.

    From: src/operator/tensor/elemwise_unary_op_basic.cc:200

    returns

    org.apache.mxnet.NDArray

  234. abstract def identity(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns a copy of the input.

    From: src/operator/tensor/elemwise_unary_op_basic.cc:200

    returns

    org.apache.mxnet.NDArray

  235. abstract def khatri_rao(args: Any*): NDArrayFuncReturn

    Computes the Khatri-Rao product of the input matrices.

    Given a collection of :math:n input matrices,

    .. math::
    A_1 \in \mathbb{R}^{M_1 \times N}, \ldots, A_n \in \mathbb{R}^{M_n \times N},

    the (column-wise) Khatri-Rao product is defined as the matrix,

    .. math::
    X = A_1 \otimes \cdots \otimes A_n \in \mathbb{R}^{(M_1 \cdots M_n) \times N},

    where the :math:k th column is equal to the column-wise outer product
    :math:{A_1}_k \otimes \cdots \otimes {A_n}_k where :math:{A_i}_k is the kth
    column of the ith matrix.

    Example::

    >>> A = mx.nd.array([[1, -1],
    >>>                  [2, -3]])
    >>> B = mx.nd.array([[1, 4],
    >>>                  [2, 5],
    >>>                  [3, 6]])
    >>> C = mx.nd.khatri_rao(A, B)
    >>> print(C.asnumpy())
    [[  1.  -4.]
     [  2.  -5.]
     [  3.  -6.]
     [  2. -12.]
     [  4. -15.]
     [  6. -18.]]




    Defined in src/operator/contrib/krprod.cc:L108

    returns

    org.apache.mxnet.NDArray

  236. abstract def khatri_rao(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the Khatri-Rao product of the input matrices.

    Given a collection of :math:n input matrices,

    .. math::
    A_1 \in \mathbb{R}^{M_1 \times N}, \ldots, A_n \in \mathbb{R}^{M_n \times N},

    the (column-wise) Khatri-Rao product is defined as the matrix,

    .. math::
    X = A_1 \otimes \cdots \otimes A_n \in \mathbb{R}^{(M_1 \cdots M_n) \times N},

    where the :math:k th column is equal to the column-wise outer product
    :math:{A_1}_k \otimes \cdots \otimes {A_n}_k where :math:{A_i}_k is the kth
    column of the ith matrix.

    Example::

    >>> A = mx.nd.array([[1, -1],
    >>>                  [2, -3]])
    >>> B = mx.nd.array([[1, 4],
    >>>                  [2, 5],
    >>>                  [3, 6]])
    >>> C = mx.nd.khatri_rao(A, B)
    >>> print(C.asnumpy())
    [[  1.  -4.]
     [  2.  -5.]
     [  3.  -6.]
     [  2. -12.]
     [  4. -15.]
     [  6. -18.]]




    Defined in src/operator/contrib/krprod.cc:L108

    returns

    org.apache.mxnet.NDArray
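
    A Scala counterpart to the Python example above (a sketch; the output shape is (M_1 * M_2) x N = 6 x 2):

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, -1f, 2f, -3f), Shape(2, 2))
    val b = NDArray.array(Array(1f, 4f, 2f, 5f, 3f, 6f), Shape(3, 2))
    val c = NDArray.khatri_rao(a, b).head
    println(c.shape)                        // (6,2)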

  237. abstract def linalg_gelqf(args: Any*): NDArrayFuncReturn

    LQ factorization for general matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, we compute the LQ factorization (LAPACK *gelqf*, followed by *orglq*). *A*
    must have shape *(x, y)* with *x <= y*, and must have full rank *=x*. The LQ
    factorization consists of *L* with shape *(x, x)* and *Q* with shape *(x, y)*, so
    that:

    *A* = *L* \* *Q*

    Here, *L* is lower triangular (upper triangle equal to zero) with nonzero diagonal,
    and *Q* is row-orthonormal, meaning that

    *Q* \* *Q*\ :sup:T

    is equal to the identity matrix of shape *(x, x)*.

    If *n>2*, *gelqf* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single LQ factorization
    A = [[1., 2., 3.], [4., 5., 6.]]
    Q, L = gelqf(A)
    Q = [[-0.26726124, -0.53452248, -0.80178373],
         [0.87287156, 0.21821789, -0.43643578]]
    L = [[-3.74165739, 0.],
         [-8.55235974, 1.96396101]]

    // Batch LQ factorization
    A = [[[1., 2., 3.], [4., 5., 6.]],
         [[7., 8., 9.], [10., 11., 12.]]]
    Q, L = gelqf(A)
    Q = [[[-0.26726124, -0.53452248, -0.80178373],
          [0.87287156, 0.21821789, -0.43643578]],
         [[-0.50257071, -0.57436653, -0.64616234],
          [0.7620735, 0.05862104, -0.64483142]]]
    L = [[[-3.74165739, 0.],
          [-8.55235974, 1.96396101]],
         [[-13.92838828, 0.],
          [-19.09768702, 0.52758934]]]


    Defined in src/operator/tensor/la_op.cc:L552

    returns

    org.apache.mxnet.NDArray

  238. abstract def linalg_gelqf(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    LQ factorization for general matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, we compute the LQ factorization (LAPACK *gelqf*, followed by *orglq*). *A*
    must have shape *(x, y)* with *x <= y*, and must have full rank *=x*. The LQ
    factorization consists of *L* with shape *(x, x)* and *Q* with shape *(x, y)*, so
    that:

    *A* = *L* \* *Q*

    Here, *L* is lower triangular (upper triangle equal to zero) with nonzero diagonal,
    and *Q* is row-orthonormal, meaning that

    *Q* \* *Q*\ :sup:T

    is equal to the identity matrix of shape *(x, x)*.

    If *n>2*, *gelqf* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single LQ factorization
    A = [[1., 2., 3.], [4., 5., 6.]]
    Q, L = gelqf(A)
    Q = [[-0.26726124, -0.53452248, -0.80178373],
         [0.87287156, 0.21821789, -0.43643578]]
    L = [[-3.74165739, 0.],
         [-8.55235974, 1.96396101]]

    // Batch LQ factorization
    A = [[[1., 2., 3.], [4., 5., 6.]],
         [[7., 8., 9.], [10., 11., 12.]]]
    Q, L = gelqf(A)
    Q = [[[-0.26726124, -0.53452248, -0.80178373],
          [0.87287156, 0.21821789, -0.43643578]],
         [[-0.50257071, -0.57436653, -0.64616234],
          [0.7620735, 0.05862104, -0.64483142]]]
    L = [[[-3.74165739, 0.],
          [-8.55235974, 1.96396101]],
         [[-13.92838828, 0.],
          [-19.09768702, 0.52758934]]]


    Defined in src/operator/tensor/la_op.cc:L552

    returns

    org.apache.mxnet.NDArray

  239. abstract def linalg_gemm(args: Any*): NDArrayFuncReturn

    Performs general matrix multiplication and accumulation.
    Inputs are tensors *A*, *B*, *C*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, the BLAS3 function *gemm* is performed:

    *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*) + *beta* \* *C*

    Here, *alpha* and *beta* are scalar parameters, and *op()* is either the identity or
    matrix transposition (depending on *transpose_a*, *transpose_b*).

    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.

    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B*, *C* be 5 dimensional tensors. Then gemm(*A*, *B*, *C*, axis=1) is equivalent to

    A1 = swapaxes(A, dim1=1, dim2=3)
    B1 = swapaxes(B, dim1=1, dim2=3)
    C = swapaxes(C, dim1=1, dim2=3)
    C = gemm(A1, B1, C)
    C = swapaxes(C, dim1=1, dim2=3)

    without the overhead of the additional swapaxes operations.

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply-add
    A = [[1.0, 1.0], [1.0, 1.0]]
    B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
    C = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
    gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
            = [[14.0, 14.0, 14.0], [14.0, 14.0, 14.0]]

    // Batch matrix multiply-add
    A = [[[1.0, 1.0]], [[0.1, 0.1]]]
    B = [[[1.0, 1.0]], [[0.1, 0.1]]]
    C = [[[10.0]], [[0.01]]]
    gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
            = [[[104.0]], [[0.14]]]


    Defined in src/operator/tensor/la_op.cc:L81

    returns

    org.apache.mxnet.NDArray

  240. abstract def linalg_gemm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs general matrix multiplication and accumulation.
    Inputs are tensors *A*, *B*, *C*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, the BLAS3 function *gemm* is performed:

    *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*) + *beta* \* *C*

    Here, *alpha* and *beta* are scalar parameters, and *op()* is either the identity or
    matrix transposition (depending on *transpose_a*, *transpose_b*).

    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.

    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B*, *C* be 5 dimensional tensors. Then gemm(*A*, *B*, *C*, axis=1) is equivalent to

    A1 = swapaxes(A, dim1=1, dim2=3)
    B1 = swapaxes(B, dim1=1, dim2=3)
    C = swapaxes(C, dim1=1, dim2=3)
    C = gemm(A1, B1, C)
    C = swapaxes(C, dim1=1, dim2=3)

    without the overhead of the additional swapaxes operations.

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply-add
    A = [[1.0, 1.0], [1.0, 1.0]]
    B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
    C = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
    gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
            = [[14.0, 14.0, 14.0], [14.0, 14.0, 14.0]]

    // Batch matrix multiply-add
    A = [[[1.0, 1.0]], [[0.1, 0.1]]]
    B = [[[1.0, 1.0]], [[0.1, 0.1]]]
    C = [[[10.0]], [[0.01]]]
    gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
            = [[[104.0]], [[0.14]]]


    Defined in src/operator/tensor/la_op.cc:L81

    returns

    org.apache.mxnet.NDArray
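
    A sketch of the single multiply-add case above from Scala; the scalar parameters go through the kwargs overload, with names taken from the description:

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.ones(Shape(2, 2))
    val b = NDArray.ones(Shape(3, 2))
    val c = NDArray.ones(Shape(2, 3))
    // out = 2.0 * A * B^T + 10.0 * C  ->  every entry 2*2 + 10 = 14
    val out = NDArray.linalg_gemm(
      Map("transpose_b" -> true, "alpha" -> 2.0, "beta" -> 10.0))(a, b, c).head
    println(out.toArray.mkString(", "))     // six values of 14.0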

  241. abstract def linalg_gemm2(args: Any*): NDArrayFuncReturn

    Performs general matrix multiplication.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, the BLAS3 function *gemm* is performed:

    *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*)

    Here *alpha* is a scalar parameter and *op()* is either the identity or the matrix
    transposition (depending on *transpose_a*, *transpose_b*).

    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.

    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B* be 5 dimensional tensors. Then gemm(*A*, *B*, axis=1) is equivalent to

    A1 = swapaxes(A, dim1=1, dim2=3)
    B1 = swapaxes(B, dim1=1, dim2=3)
    C = gemm2(A1, B1)
    C = swapaxes(C, dim1=1, dim2=3)

    without the overhead of the additional swapaxes operations.

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply
    A = [[1.0, 1.0], [1.0, 1.0]]
    B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
    gemm2(A, B, transpose_b=True, alpha=2.0)
            = [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]

    // Batch matrix multiply
    A = [[[1.0, 1.0]], [[0.1, 0.1]]]
    B = [[[1.0, 1.0]], [[0.1, 0.1]]]
    gemm2(A, B, transpose_b=True, alpha=2.0)
            = [[[4.0]], [[0.04]]]


    Defined in src/operator/tensor/la_op.cc:L151

    returns

    org.apache.mxnet.NDArray
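
    A short Scala sketch mirroring the single-matrix example, under the same
    assumption as the gemm sketch above that scalars go through the kwargs
    overload::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 1f, 1f, 1f), shape = Shape(2, 2))
    val b = NDArray.array(Array(1f, 1f, 1f, 1f, 1f, 1f), shape = Shape(3, 2))
    // out = 2.0 * a * b^T: every entry of the (2, 3) result is 4.0
    val out = NDArray.linalg_gemm2(Map(
      "transpose_b" -> true, "alpha" -> 2.0))(a, b).head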

  242. abstract def linalg_gemm2(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs general matrix multiplication.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, the BLAS3 function *gemm* is performed:

    *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*)

    Here *alpha* is a scalar parameter and *op()* is either the identity or the matrix
    transposition (depending on *transpose_a*, *transpose_b*).

    If *n>2*, *gemm* is performed separately for a batch of matrices.

    Performs general matrix multiplication.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, the BLAS3 function *gemm* is performed:

    *out* = *alpha* \* *op*\ (*A*) \* *op*\ (*B*)

    Here *alpha* is a scalar parameter and *op()* is either the identity or the matrix
    transposition (depending on *transpose_a*, *transpose_b*).

    If *n>2*, *gemm* is performed separately for a batch of matrices. The column indices of the matrices
    are given by the last dimensions of the tensors, the row indices by the axis specified with the *axis*
    parameter. By default, the trailing two dimensions will be used for matrix encoding.

    For a non-default axis parameter, the operation performed is equivalent to a series of swapaxes/gemm/swapaxes
    calls. For example let *A*, *B* be 5 dimensional tensors. Then gemm(*A*, *B*, axis=1) is equivalent to

    A1 = swapaxes(A, dim1=1, dim2=3)
    B1 = swapaxes(B, dim1=1, dim2=3)
    C = gemm2(A1, B1)
    C = swapaxes(C, dim1=1, dim2=3)

    without the overhead of the additional swapaxes operations.

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply
    A = [[1.0, 1.0], [1.0, 1.0]]
    B = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
    gemm2(A, B, transpose_b=True, alpha=2.0)
            = [[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]

    // Batch matrix multiply
    A = [[[1.0, 1.0]], [[0.1, 0.1]]]
    B = [[[1.0, 1.0]], [[0.1, 0.1]]]
    gemm2(A, B, transpose_b=True, alpha=2.0)
            = [[[4.0]], [[0.04]]]


    Defined in src/operator/tensor/la_op.cc:L151

    returns

    org.apache.mxnet.NDArray

  243. abstract def linalg_potrf(args: Any*): NDArrayFuncReturn

    Performs Cholesky factorization of a symmetric positive-definite matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the Cholesky factor *L* of the symmetric, positive definite matrix *A* is
    computed.

    Performs Cholesky factorization of a symmetric positive-definite matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the Cholesky factor *L* of the symmetric, positive definite matrix *A* is
    computed. *L* is lower triangular (entries of upper triangle are all zero), has
    positive diagonal entries, and:

    *A* = *L* \* *L*\ :sup:T

    If *n>2*, *potrf* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix factorization
    A = [[4.0, 1.0], [1.0, 4.25]]
    potrf(A) = [[2.0, 0], [0.5, 2.0]]

    // Batch matrix factorization
    A = [[[4.0, 1.0], [1.0, 4.25]], [[16.0, 4.0], [4.0, 17.0]]]
    potrf(A) = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]


    Defined in src/operator/tensor/la_op.cc:L201

    returns

    org.apache.mxnet.NDArray
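
    A minimal Scala sketch of the single-matrix factorization above (assuming
    the concrete NDArray companion object implements this member)::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(4f, 1f, 1f, 4.25f), shape = Shape(2, 2))
    // l is the lower-triangular Cholesky factor [[2.0, 0], [0.5, 2.0]],
    // so l * l^T reconstructs a
    val l = NDArray.linalg_potrf(a).head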

  244. abstract def linalg_potrf(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs Cholesky factorization of a symmetric positive-definite matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the Cholesky factor *L* of the symmetric, positive definite matrix *A* is
    computed.

    Performs Cholesky factorization of a symmetric positive-definite matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the Cholesky factor *L* of the symmetric, positive definite matrix *A* is
    computed. *L* is lower triangular (entries of upper triangle are all zero), has
    positive diagonal entries, and:

    *A* = *L* \* *L*\ :sup:T

    If *n>2*, *potrf* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix factorization
    A = [[4.0, 1.0], [1.0, 4.25]]
    potrf(A) = [[2.0, 0], [0.5, 2.0]]

    // Batch matrix factorization
    A = [[[4.0, 1.0], [1.0, 4.25]], [[16.0, 4.0], [4.0, 17.0]]]
    potrf(A) = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]


    Defined in src/operator/tensor/la_op.cc:L201

    returns

    org.apache.mxnet.NDArray

  245. abstract def linalg_potri(args: Any*): NDArrayFuncReturn

    Performs matrix inversion from a Cholesky factorization.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* is a lower triangular matrix (entries of upper triangle are all zero)
    with positive diagonal.

    Performs matrix inversion from a Cholesky factorization.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* is a lower triangular matrix (entries of upper triangle are all zero)
    with positive diagonal. We compute:

    *out* = *A*\ :sup:-T \* *A*\ :sup:-1

    In other words, if *A* is the Cholesky factor of a symmetric positive definite matrix
    *B* (obtained by *potrf*), then

    *out* = *B*\ :sup:-1

    If *n>2*, *potri* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    .. note:: Use this operator only if you are certain you need the inverse of *B*, and
    cannot use the Cholesky factor *A* (*potrf*), together with backsubstitution
    (*trsm*). The latter is numerically much safer, and also cheaper.

    Examples::

    // Single matrix inverse
    A = [[2.0, 0], [0.5, 2.0]]
    potri(A) = [[0.26563, -0.0625], [-0.0625, 0.25]]

    // Batch matrix inverse
    A = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]
    potri(A) = [[[0.26563, -0.0625], [-0.0625, 0.25]],
                [[0.06641, -0.01562], [-0.01562, 0.0625]]]


    Defined in src/operator/tensor/la_op.cc:L259

    returns

    org.apache.mxnet.NDArray
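
    A Scala sketch feeding potrf's output into potri, so that the result is the
    inverse of the original symmetric matrix (same implementation assumption as
    the sketches above)::

    import org.apache.mxnet.{NDArray, Shape}

    val b = NDArray.array(Array(4f, 1f, 1f, 4.25f), shape = Shape(2, 2))
    val l = NDArray.linalg_potrf(b).head      // Cholesky factor of b
    val bInv = NDArray.linalg_potri(l).head   // l^-T * l^-1 = b^-1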

  246. abstract def linalg_potri(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs matrix inversion from a Cholesky factorization.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* is a lower triangular matrix (entries of upper triangle are all zero)
    with positive diagonal.

    Performs matrix inversion from a Cholesky factorization.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* is a lower triangular matrix (entries of upper triangle are all zero)
    with positive diagonal. We compute:

    *out* = *A*\ :sup:-T \* *A*\ :sup:-1

    In other words, if *A* is the Cholesky factor of a symmetric positive definite matrix
    *B* (obtained by *potrf*), then

    *out* = *B*\ :sup:-1

    If *n>2*, *potri* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    .. note:: Use this operator only if you are certain you need the inverse of *B*, and
    cannot use the Cholesky factor *A* (*potrf*), together with backsubstitution
    (*trsm*). The latter is numerically much safer, and also cheaper.

    Examples::

    // Single matrix inverse
    A = [[2.0, 0], [0.5, 2.0]]
    potri(A) = [[0.26563, -0.0625], [-0.0625, 0.25]]

    // Batch matrix inverse
    A = [[[2.0, 0], [0.5, 2.0]], [[4.0, 0], [1.0, 4.0]]]
    potri(A) = [[[0.26563, -0.0625], [-0.0625, 0.25]],
                [[0.06641, -0.01562], [-0.01562, 0.0625]]]


    Defined in src/operator/tensor/la_op.cc:L259

    returns

    org.apache.mxnet.NDArray

  247. abstract def linalg_sumlogdiag(args: Any*): NDArrayFuncReturn

    Computes the sum of the logarithms of the diagonal elements of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* must be square with positive diagonal entries.

    Computes the sum of the logarithms of the diagonal elements of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* must be square with positive diagonal entries. We sum the natural
    logarithms of the diagonal elements, the result has shape (1,).

    If *n>2*, *sumlogdiag* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix reduction
    A = [[1.0, 1.0], [1.0, 7.0]]
    sumlogdiag(A) = [1.9459]

    // Batch matrix reduction
    A = [[[1.0, 1.0], [1.0, 7.0]], [[3.0, 0], [0, 17.0]]]
    sumlogdiag(A) = [1.9459, 3.9318]


    Defined in src/operator/tensor/la_op.cc:L428

    returns

    org.apache.mxnet.NDArray
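
    A Scala sketch of the single-matrix reduction (assuming the concrete
    NDArray companion object implements this member)::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 1f, 1f, 7f), shape = Shape(2, 2))
    // log(1.0) + log(7.0) ~= 1.9459, returned with shape (1,)
    val s = NDArray.linalg_sumlogdiag(a).head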

  248. abstract def linalg_sumlogdiag(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the sum of the logarithms of the diagonal elements of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* must be square with positive diagonal entries.

    Computes the sum of the logarithms of the diagonal elements of a square matrix.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, *A* must be square with positive diagonal entries. We sum the natural
    logarithms of the diagonal elements, the result has shape (1,).

    If *n>2*, *sumlogdiag* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix reduction
    A = [[1.0, 1.0], [1.0, 7.0]]
    sumlogdiag(A) = [1.9459]

    // Batch matrix reduction
    A = [[[1.0, 1.0], [1.0, 7.0]], [[3.0, 0], [0, 17.0]]]
    sumlogdiag(A) = [1.9459, 3.9318]


    Defined in src/operator/tensor/la_op.cc:L428

    returns

    org.apache.mxnet.NDArray

  249. abstract def linalg_syrk(args: Any*): NDArrayFuncReturn

    Multiplication of matrix with its transpose.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the operator performs the BLAS3 function *syrk*:

    *out* = *alpha* \* *A* \* *A*\ :sup:T

    if *transpose=False*, or

    *out* = *alpha* \* *A*\ :sup:T \ \* *A*

    if *transpose=True*.

    If *n>2*, *syrk* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    ..

    Multiplication of matrix with its transpose.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the operator performs the BLAS3 function *syrk*:

    *out* = *alpha* \* *A* \* *A*\ :sup:T

    if *transpose=False*, or

    *out* = *alpha* \* *A*\ :sup:T \ \* *A*

    if *transpose=True*.

    If *n>2*, *syrk* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply
    A = [[1., 2., 3.], [4., 5., 6.]]
    syrk(A, alpha=1., transpose=False)
            = [[14., 32.],
               [32., 77.]]

    syrk(A, alpha=1., transpose=True)
            = [[17., 22., 27.],
               [22., 29., 36.],
               [27., 36., 45.]]

    // Batch matrix multiply
    A = [[[1., 1.]], [[0.1, 0.1]]]
    syrk(A, alpha=2., transpose=False) = [[[4.]], [[0.04]]]


    Defined in src/operator/tensor/la_op.cc:L484

    returns

    org.apache.mxnet.NDArray
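
    A Scala sketch of both transpose settings, assuming scalar parameters are
    passed through the kwargs overload::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(2, 3))
    // a * a^T, a (2, 2) result
    val aat = NDArray.linalg_syrk(Map("alpha" -> 1.0, "transpose" -> false))(a).head
    // a^T * a, a (3, 3) result
    val ata = NDArray.linalg_syrk(Map("alpha" -> 1.0, "transpose" -> true))(a).head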

  250. abstract def linalg_syrk(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Multiplication of matrix with its transpose.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the operator performs the BLAS3 function *syrk*:

    *out* = *alpha* \* *A* \* *A*\ :sup:T

    if *transpose=False*, or

    *out* = *alpha* \* *A*\ :sup:T \ \* *A*

    if *transpose=True*.

    If *n>2*, *syrk* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    ..

    Multiplication of matrix with its transpose.
    Input is a tensor *A* of dimension *n >= 2*.

    If *n=2*, the operator performs the BLAS3 function *syrk*:

    *out* = *alpha* \* *A* \* *A*\ :sup:T

    if *transpose=False*, or

    *out* = *alpha* \* *A*\ :sup:T \ \* *A*

    if *transpose=True*.

    If *n>2*, *syrk* is performed separately on the trailing two dimensions for all
    inputs (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix multiply
    A = [[1., 2., 3.], [4., 5., 6.]]
    syrk(A, alpha=1., transpose=False)
            = [[14., 32.],
               [32., 77.]]

    syrk(A, alpha=1., transpose=True)
            = [[17., 22., 27.],
               [22., 29., 36.],
               [27., 36., 45.]]

    // Batch matrix multiply
    A = [[[1., 1.]], [[0.1, 0.1]]]
    syrk(A, alpha=2., transpose=False) = [[[4.]], [[0.04]]]


    Defined in src/operator/tensor/la_op.cc:L484

    returns

    org.apache.mxnet.NDArray

  251. abstract def linalg_trmm(args: Any*): NDArrayFuncReturn

    Performs multiplication with a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular.

    Performs multiplication with a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular. The operator performs the BLAS3 function
    *trmm*:

    *out* = *alpha* \* *op*\ (*A*) \* *B*

    if *rightside=False*, or

    *out* = *alpha* \* *B* \* *op*\ (*A*)

    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).

    If *n>2*, *trmm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.


    Examples::

    // Single triangular matrix multiply
    A = [[1.0, 0], [1.0, 1.0]]
    B = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
    trmm(A, B, alpha=2.0) = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]

    // Batch triangular matrix multiply
    A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
    B = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]]
    trmm(A, B, alpha=2.0) = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
                             [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]


    Defined in src/operator/tensor/la_op.cc:L316

    returns

    org.apache.mxnet.NDArray
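
    A Scala sketch of the single triangular multiply (same kwargs assumption as
    the sketches above)::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 0f, 1f, 1f), shape = Shape(2, 2))  // lower triangular
    val b = NDArray.array(Array.fill(6)(1f), shape = Shape(2, 3))
    // 2.0 * a * b = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
    val out = NDArray.linalg_trmm(Map("alpha" -> 2.0))(a, b).head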

  252. abstract def linalg_trmm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Performs multiplication with a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular.

    Performs multiplication with a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular. The operator performs the BLAS3 function
    *trmm*:

    *out* = *alpha* \* *op*\ (*A*) \* *B*

    if *rightside=False*, or

    *out* = *alpha* \* *B* \* *op*\ (*A*)

    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).

    If *n>2*, *trmm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.


    Examples::

    // Single triangular matrix multiply
    A = [[1.0, 0], [1.0, 1.0]]
    B = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
    trmm(A, B, alpha=2.0) = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]

    // Batch triangular matrix multiply
    A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
    B = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]]
    trmm(A, B, alpha=2.0) = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
                             [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]


    Defined in src/operator/tensor/la_op.cc:L316

    returns

    org.apache.mxnet.NDArray

  253. abstract def linalg_trsm(args: Any*): NDArrayFuncReturn

    Solves matrix equation involving a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular.

    Solves matrix equation involving a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular. The operator performs the BLAS3 function
    *trsm*, solving for *out* in:

    *op*\ (*A*) \* *out* = *alpha* \* *B*

    if *rightside=False*, or

    *out* \* *op*\ (*A*) = *alpha* \* *B*

    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).

    If *n>2*, *trsm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix solve
    A = [[1.0, 0], [1.0, 1.0]]
    B = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
    trsm(A, B, alpha=0.5) = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]

    // Batch matrix solve
    A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
    B = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
         [[4.0, 4.0, 4.0], [8.0, 8.0, 8.0]]]
    trsm(A, B, alpha=0.5) = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                             [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]


    Defined in src/operator/tensor/la_op.cc:L379

    returns

    org.apache.mxnet.NDArray
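
    A Scala sketch of the single solve; note that it undoes the trmm sketch
    above (same kwargs assumption)::

    import org.apache.mxnet.{NDArray, Shape}

    val a = NDArray.array(Array(1f, 0f, 1f, 1f), shape = Shape(2, 2))  // lower triangular
    val b = NDArray.array(Array(2f, 2f, 2f, 4f, 4f, 4f), shape = Shape(2, 3))
    // solves a * out = 0.5 * b, giving a (2, 3) result of all ones
    val x = NDArray.linalg_trsm(Map("alpha" -> 0.5))(a, b).head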

  254. abstract def linalg_trsm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Solves matrix equation involving a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular.

    Solves matrix equation involving a lower triangular matrix.
    Input are tensors *A*, *B*, each of dimension *n >= 2* and having the same shape
    on the leading *n-2* dimensions.

    If *n=2*, *A* must be lower triangular. The operator performs the BLAS3 function
    *trsm*, solving for *out* in:

    *op*\ (*A*) \* *out* = *alpha* \* *B*

    if *rightside=False*, or

    *out* \* *op*\ (*A*) = *alpha* \* *B*

    if *rightside=True*. Here, *alpha* is a scalar parameter, and *op()* is either the
    identity or the matrix transposition (depending on *transpose*).

    If *n>2*, *trsm* is performed separately on the trailing two dimensions for all inputs
    (batch mode).

    .. note:: The operator supports float32 and float64 data types only.

    Examples::

    // Single matrix solve
    A = [[1.0, 0], [1.0, 1.0]]
    B = [[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]
    trsm(A, B, alpha=0.5) = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]

    // Batch matrix solve
    A = [[[1.0, 0], [1.0, 1.0]], [[1.0, 0], [1.0, 1.0]]]
    B = [[[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]],
         [[4.0, 4.0, 4.0], [8.0, 8.0, 8.0]]]
    trsm(A, B, alpha=0.5) = [[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                             [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]


    Defined in src/operator/tensor/la_op.cc:L379

    returns

    org.apache.mxnet.NDArray

  255. abstract def log(args: Any*): NDArrayFuncReturn

    Returns element-wise Natural logarithmic value of the input.

    The natural logarithm is logarithm in base *e*, so that log(exp(x)) = x

    The storage type of log output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L851

    Returns element-wise Natural logarithmic value of the input.

    The natural logarithm is logarithm in base *e*, so that log(exp(x)) = x

    The storage type of log output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L851

    returns

    org.apache.mxnet.NDArray

  256. abstract def log(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise Natural logarithmic value of the input.

    The natural logarithm is logarithm in base *e*, so that log(exp(x)) = x

    The storage type of log output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L851

    Returns element-wise Natural logarithmic value of the input.

    The natural logarithm is logarithm in base *e*, so that log(exp(x)) = x

    The storage type of log output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L851

    returns

    org.apache.mxnet.NDArray

  257. abstract def log10(args: Any*): NDArrayFuncReturn

    Returns element-wise Base-10 logarithmic value of the input.

    10**log10(x) = x

    The storage type of log10 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L863

    Returns element-wise Base-10 logarithmic value of the input.

    10**log10(x) = x

    The storage type of log10 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L863

    returns

    org.apache.mxnet.NDArray

  258. abstract def log10(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise Base-10 logarithmic value of the input.

    10**log10(x) = x

    The storage type of log10 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L863

    Returns element-wise Base-10 logarithmic value of the input.

    10**log10(x) = x

    The storage type of log10 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L863

    returns

    org.apache.mxnet.NDArray

  259. abstract def log1p(args: Any*): NDArrayFuncReturn

    Returns element-wise log(1 + x) value of the input.

    This function is more accurate than computing log(1 + x) directly for
    small x, where :math:1+x\approx 1 in floating-point arithmetic.

    The storage type of log1p output depends upon the input storage type:

    Returns element-wise log(1 + x) value of the input.

    This function is more accurate than computing log(1 + x) directly for
    small x, where :math:1+x\approx 1 in floating-point arithmetic.

    The storage type of log1p output depends upon the input storage type:

    • log1p(default) = default
    • log1p(row_sparse) = row_sparse
    • log1p(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L900
    returns

    org.apache.mxnet.NDArray
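
    A Scala sketch contrasting log1p with the naive composition (assuming the
    concrete NDArray companion object implements these members)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1e-8f, 0.5f), shape = Shape(2))
    // log1p stays accurate even where 1 + x rounds to 1 in float32
    val accurate = NDArray.log1p(x).head
    val naive = NDArray.log(x + 1f).head  // loses the 1e-8 entry to rounding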

  260. abstract def log1p(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise log(1 + x) value of the input.

    This function is more accurate than computing log(1 + x) directly for
    small x, where :math:1+x\approx 1 in floating-point arithmetic.

    The storage type of log1p output depends upon the input storage type:

    Returns element-wise log(1 + x) value of the input.

    This function is more accurate than computing log(1 + x) directly for
    small x, where :math:1+x\approx 1 in floating-point arithmetic.

    The storage type of log1p output depends upon the input storage type:

    • log1p(default) = default
    • log1p(row_sparse) = row_sparse
    • log1p(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L900
    returns

    org.apache.mxnet.NDArray

  261. abstract def log2(args: Any*): NDArrayFuncReturn

    Returns element-wise Base-2 logarithmic value of the input.

    2**log2(x) = x

    The storage type of log2 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L875

    Returns element-wise Base-2 logarithmic value of the input.

    2**log2(x) = x

    The storage type of log2 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L875

    returns

    org.apache.mxnet.NDArray

  262. abstract def log2(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise Base-2 logarithmic value of the input.

    2**log2(x) = x

    The storage type of log2 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L875

    Returns element-wise Base-2 logarithmic value of the input.

    2**log2(x) = x

    The storage type of log2 output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L875

    returns

    org.apache.mxnet.NDArray

  263. abstract def log_softmax(args: Any*): NDArrayFuncReturn

    Computes the log softmax of the input.
    This is equivalent to computing softmax followed by log.

    Examples::

    >>> x = mx.nd.array([1, 2, .1])
    >>> mx.nd.log_softmax(x).asnumpy()
    array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)

    >>> x = mx.nd.array( [[1, 2, .1],[.1, 2, 1]] )
    >>> mx.nd.log_softmax(x, axis=0).asnumpy()
    array([[-0.34115392, -0.69314718, -1.24115396],
           [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)

    Computes the log softmax of the input.
    This is equivalent to computing softmax followed by log.

    Examples::

    >>> x = mx.nd.array([1, 2, .1])
    >>> mx.nd.log_softmax(x).asnumpy()
    array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)

    >>> x = mx.nd.array( [[1, 2, .1],[.1, 2, 1]] )
    >>> mx.nd.log_softmax(x, axis=0).asnumpy()
    array([[-0.34115392, -0.69314718, -1.24115396],
           [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)

    returns

    org.apache.mxnet.NDArray
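
    A Scala sketch checking the equivalence with softmax followed by log
    (assuming the concrete NDArray companion object implements these members)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 0.1f), shape = Shape(3))
    val direct = NDArray.log_softmax(x).head
    // same values, but computed less stably in two steps
    val composed = NDArray.log(NDArray.softmax(x).head).head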

  264. abstract def log_softmax(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the log softmax of the input.
    This is equivalent to computing softmax followed by log.

    Examples::

    >>> x = mx.nd.array([1, 2, .1])
    >>> mx.nd.log_softmax(x).asnumpy()
    array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)

    >>> x = mx.nd.array( [[1, 2, .1],[.1, 2, 1]] )
    >>> mx.nd.log_softmax(x, axis=0).asnumpy()
    array([[-0.34115392, -0.69314718, -1.24115396],
           [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)

    Computes the log softmax of the input.
    This is equivalent to computing softmax followed by log.

    Examples::

    >>> x = mx.nd.array([1, 2, .1])
    >>> mx.nd.log_softmax(x).asnumpy()
    array([-1.41702998, -0.41702995, -2.31702995], dtype=float32)

    >>> x = mx.nd.array( [[1, 2, .1],[.1, 2, 1]] )
    >>> mx.nd.log_softmax(x, axis=0).asnumpy()
    array([[-0.34115392, -0.69314718, -1.24115396],
           [-1.24115396, -0.69314718, -0.34115392]], dtype=float32)

    returns

    org.apache.mxnet.NDArray

  265. abstract def logical_not(args: Any*): NDArrayFuncReturn

    Returns the result of the logical NOT (!) function.

    Example:
    logical_not([-2., 0., 1.]) = [0., 1., 0.]

    Returns the result of the logical NOT (!) function.

    Example:
    logical_not([-2., 0., 1.]) = [0., 1., 0.]

    returns

    org.apache.mxnet.NDArray

  266. abstract def logical_not(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the result of the logical NOT (!) function.

    Example:
    logical_not([-2., 0., 1.]) = [0., 1., 0.]

    Returns the result of the logical NOT (!) function.

    Example:
    logical_not([-2., 0., 1.]) = [0., 1., 0.]

    returns

    org.apache.mxnet.NDArray

  267. abstract def make_loss(args: Any*): NDArrayFuncReturn

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross entropy loss function.

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross entropy loss function. Assume out is the
    predicted output and label is the true label; then the cross entropy can be defined as::

    cross_entropy = label * log(out) + (1 - label) * log(1 - out)
    loss = make_loss(cross_entropy)

    We need to use make_loss when we are creating our own loss function or when we
    want to combine multiple loss functions. We may also want to stop some variables'
    gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

    The storage type of make_loss output depends upon the input storage type:

    • make_loss(default) = default
    • make_loss(row_sparse) = row_sparse



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L298
    returns

    org.apache.mxnet.NDArray

  268. abstract def make_loss(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross entropy loss function.

    Make your own loss function in network construction.

    This operator accepts a customized loss function symbol as a terminal loss and
    the symbol should be an operator with no backward dependency.
    The output of this function is the gradient of loss with respect to the input data.

    For example, suppose you are making a cross entropy loss function. Assume out is the
    predicted output and label is the true label; then the cross entropy can be defined as::

    cross_entropy = label * log(out) + (1 - label) * log(1 - out)
    loss = make_loss(cross_entropy)

    We need to use make_loss when we are creating our own loss function or when we
    want to combine multiple loss functions. We may also want to stop some variables'
    gradients from backpropagation. See more detail in BlockGrad or stop_gradient.

    The storage type of make_loss output depends upon the input storage type:

    • make_loss(default) = default
    • make_loss(row_sparse) = row_sparse



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L298
    returns

    org.apache.mxnet.NDArray

  269. abstract def max(args: Any*): NDArrayFuncReturn

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    returns

    org.apache.mxnet.NDArray
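
    A Scala sketch of axis reductions with max (the same pattern applies to
    min, mean, prod and sum), assuming the axis parameter is passed through the
    kwargs overload::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f), shape = Shape(2, 2))
    val colMax = NDArray.max(Map("axis" -> 0))(x).head  // [3., 4.]
    val rowMax = NDArray.max(Map("axis" -> 1))(x).head  // [2., 4.]
    val global = NDArray.max(x).head                    // [4.]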

  270. abstract def max(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    returns

    org.apache.mxnet.NDArray

  271. abstract def max_axis(args: Any*): NDArrayFuncReturn

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    returns

    org.apache.mxnet.NDArray

  272. abstract def max_axis(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    Computes the max of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L190

    returns

    org.apache.mxnet.NDArray

  273. abstract def mean(args: Any*): NDArrayFuncReturn

    Computes the mean of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L131

    Computes the mean of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L131

    returns

    org.apache.mxnet.NDArray

  274. abstract def mean(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the mean of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L131

    Computes the mean of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L131

    returns

    org.apache.mxnet.NDArray

  275. abstract def min(args: Any*): NDArrayFuncReturn

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    returns

    org.apache.mxnet.NDArray

  276. abstract def min(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    returns

    org.apache.mxnet.NDArray

  277. abstract def min_axis(args: Any*): NDArrayFuncReturn

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    returns

    org.apache.mxnet.NDArray

  278. abstract def min_axis(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    Computes the min of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L204

    returns

    org.apache.mxnet.NDArray

  279. abstract def mp_sgd_mom_update(args: Any*): NDArrayFuncReturn

    Updater function for multi-precision sgd optimizer

    Updater function for multi-precision sgd optimizer

    returns

    org.apache.mxnet.NDArray

  280. abstract def mp_sgd_mom_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Updater function for multi-precision sgd optimizer

    Updater function for multi-precision sgd optimizer

    returns

    org.apache.mxnet.NDArray

  281. abstract def mp_sgd_update(args: Any*): NDArrayFuncReturn

    Updater function for multi-precision sgd optimizer

    Updater function for multi-precision sgd optimizer

    returns

    org.apache.mxnet.NDArray

  282. abstract def mp_sgd_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Updater function for multi-precision sgd optimizer

    Updater function for multi-precision sgd optimizer

    returns

    org.apache.mxnet.NDArray

  283. abstract def nanprod(args: Any*): NDArrayFuncReturn

    Computes the product of array elements over given axes treating Not a Numbers (NaN) as one.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L176

    Computes the product of array elements over given axes treating Not a Numbers (NaN) as one.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L176

    returns

    org.apache.mxnet.NDArray

  284. abstract def nanprod(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the product of array elements over given axes treating Not a Numbers (NaN) as one.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L176

    Computes the product of array elements over given axes treating Not a Numbers (NaN) as one.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L176

    returns

    org.apache.mxnet.NDArray

  285. abstract def nansum(args: Any*): NDArrayFuncReturn

    Computes the sum of array elements over given axes treating Not a Numbers (NaN) as zero.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L161

    Computes the sum of array elements over given axes treating Not a Numbers (NaN) as zero.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L161

    returns

    org.apache.mxnet.NDArray
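
    A Scala sketch contrasting nansum and nanprod on the same input (assuming
    the concrete NDArray companion object implements these members)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, Float.NaN, 3f), shape = Shape(3))
    val s = NDArray.nansum(x).head   // [4.] : the NaN counts as 0
    val p = NDArray.nanprod(x).head  // [3.] : the NaN counts as 1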

  286. abstract def nansum(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the sum of array elements over given axes treating Not a Numbers (NaN) as zero.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L161

    Computes the sum of array elements over given axes treating Not a Numbers (NaN) as zero.



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L161

    returns

    org.apache.mxnet.NDArray

  287. abstract def negative(args: Any*): NDArrayFuncReturn

    Numerical negative of the argument, element-wise.

    The storage type of negative output depends upon the input storage type:

    Numerical negative of the argument, element-wise.

    The storage type of negative output depends upon the input storage type:

    • negative(default) = default
    • negative(row_sparse) = row_sparse
    • negative(csr) = csr
    returns

    org.apache.mxnet.NDArray

  288. abstract def negative(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Numerical negative of the argument, element-wise.

    The storage type of negative output depends upon the input storage type:

    Numerical negative of the argument, element-wise.

    The storage type of negative output depends upon the input storage type:

    • negative(default) = default
    • negative(row_sparse) = row_sparse
    • negative(csr) = csr
    returns

    org.apache.mxnet.NDArray

  289. abstract def norm(args: Any*): NDArrayFuncReturn

    Computes the norm on an NDArray.

    This operator computes the norm on an NDArray with the specified axis, depending
    on the value of the ord parameter.

    Computes the norm on an NDArray.

    This operator computes the norm on an NDArray with the specified axis, depending
    on the value of the ord parameter. By default, it computes the L2 norm on the entire
    array. Currently only ord=2 supports sparse ndarrays.

    Examples::

    x = [[[1, 2],
          [3, 4]],
         [[2, 2],
          [5, 6]]]

    norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ]
                              [5.3851647 6.3245554]]

    norm(x, ord=1, axis=1) = [[4., 6.],
                              [7., 8.]]


    rsp = x.cast_storage('row_sparse')

    norm(rsp) = [5.47722578]

    csr = x.cast_storage('csr')

    norm(csr) = [5.47722578]



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L345

    returns

    org.apache.mxnet.NDArray
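
    A Scala sketch of the dense cases, assuming ord and axis are passed through
    the kwargs overload::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f), shape = Shape(2, 2))
    val l2 = NDArray.norm(x).head                                // [sqrt(30)]
    val l1 = NDArray.norm(Map("ord" -> 1, "axis" -> 1))(x).head  // [3., 7.]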

  290. abstract def norm(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the norm on an NDArray.

    This operator computes the norm on an NDArray with the specified axis, depending
    on the value of the ord parameter.

    Computes the norm on an NDArray.

    This operator computes the norm on an NDArray with the specified axis, depending
    on the value of the ord parameter. By default, it computes the L2 norm on the entire
    array. Currently only ord=2 supports sparse ndarrays.

    Examples::

    x = [[[1, 2],
          [3, 4]],
         [[2, 2],
          [5, 6]]]

    norm(x, ord=2, axis=1) = [[3.1622777 4.472136 ]
                              [5.3851647 6.3245554]]

    norm(x, ord=1, axis=1) = [[4., 6.],
                              [7., 8.]]


    rsp = x.cast_storage('row_sparse')

    norm(rsp) = [5.47722578]

    csr = x.cast_storage('csr')

    norm(csr) = [5.47722578]



    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L345

    returns

    org.apache.mxnet.NDArray

  291. abstract def normal(args: Any*): NDArrayFuncReturn

    Draw random samples from a normal (Gaussian) distribution.

    ..

    Draw random samples from a normal (Gaussian) distribution.

    .. note:: The existing alias normal is deprecated.

    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale* (standard deviation).

    Example::

    normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478],
                                           [-1.23474145,  1.55807114]]



    Defined in src/operator/random/sample_op.cc:L85

    returns

    org.apache.mxnet.NDArray
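
    A Scala sketch of drawing a (2, 2) sample, assuming loc, scale and shape are
    passed through the kwargs overload (Shape's string form "(2,2)" matching the
    backend's tuple syntax is an assumption)::

    import org.apache.mxnet.{NDArray, Shape}

    val samples = NDArray.normal(Map(
      "loc" -> 0.0, "scale" -> 1.0, "shape" -> Shape(2, 2)))().head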

  292. abstract def normal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a normal (Gaussian) distribution.

    ..

    Draw random samples from a normal (Gaussian) distribution.

    .. note:: The existing alias normal is deprecated.

    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale* (standard deviation).

    Example::

    normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478],
                                           [-1.23474145,  1.55807114]]



    Defined in src/operator/random/sample_op.cc:L85

    returns

    org.apache.mxnet.NDArray

  293. abstract def one_hot(args: Any*): NDArrayFuncReturn

    Returns a one-hot array.

    The locations represented by indices take value on_value, while all
    other locations take value off_value.

    one_hot operation with indices of shape (i0, i1) and depth of d would result
    in an output array of shape (i0, i1, d) with::

    output[i,j,:] = off_value
    output[i,j,indices[i,j]] = on_value

    Examples::

    one_hot([1,0,2,0], 3) = [[ 0.  1.  0.]
                             [ 1.  0.  0.]
                             [ 0.  0.  1.]
                             [ 1.  0.  0.]]

    one_hot([1,0,2,0], 3, on_value=8, off_value=1,
            dtype='int32') = [[1 8 1]
                              [8 1 1]
                              [1 1 8]
                              [8 1 1]]

    one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  0.  1.]
                                        [ 1.  0.  0.]]]


    Defined in src/operator/tensor/indexing_op.cc:L508

    Returns a one-hot array.

    The locations represented by indices take value on_value, while all
    other locations take value off_value.

    one_hot operation with indices of shape (i0, i1) and depth of d would result
    in an output array of shape (i0, i1, d) with::

    output[i,j,:] = off_value
    output[i,j,indices[i,j]] = on_value

    Examples::

    one_hot([1,0,2,0], 3) = [[ 0.  1.  0.]
                             [ 1.  0.  0.]
                             [ 0.  0.  1.]
                             [ 1.  0.  0.]]

    one_hot([1,0,2,0], 3, on_value=8, off_value=1,
            dtype='int32') = [[1 8 1]
                              [8 1 1]
                              [1 1 8]
                              [8 1 1]]

    one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  0.  1.]
                                        [ 1.  0.  0.]]]


    Defined in src/operator/tensor/indexing_op.cc:L508

    returns

    org.apache.mxnet.NDArray
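
    A Scala sketch of the first example, assuming depth is passed through the
    kwargs overload::

    import org.apache.mxnet.{NDArray, Shape}

    val indices = NDArray.array(Array(1f, 0f, 2f, 0f), shape = Shape(4))
    // a (4, 3) array with a single 1.0 per row
    val oh = NDArray.one_hot(Map("depth" -> 3))(indices).head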

  294. abstract def one_hot(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns a one-hot array.

    The locations represented by indices take value on_value, while all
    other locations take value off_value.

    one_hot operation with indices of shape (i0, i1) and depth of d would result
    in an output array of shape (i0, i1, d) with::

    output[i,j,:] = off_value
    output[i,j,indices[i,j]] = on_value

    Examples::

    one_hot([1,0,2,0], 3) = [[ 0.  1.  0.]
                             [ 1.  0.  0.]
                             [ 0.  0.  1.]
                             [ 1.  0.  0.]]

    one_hot([1,0,2,0], 3, on_value=8, off_value=1,
            dtype='int32') = [[1 8 1]
                              [8 1 1]
                              [1 1 8]
                              [8 1 1]]

    one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  0.  1.]
                                        [ 1.  0.  0.]]]


    Defined in src/operator/tensor/indexing_op.cc:L508

    Returns a one-hot array.

    The locations represented by indices take value on_value, while all
    other locations take value off_value.

    one_hot operation with indices of shape (i0, i1) and depth of d would result
    in an output array of shape (i0, i1, d) with::

    output[i,j,:] = off_value
    output[i,j,indices[i,j]] = on_value

    Examples::

    one_hot([1,0,2,0], 3) = [[ 0.  1.  0.]
                             [ 1.  0.  0.]
                             [ 0.  0.  1.]
                             [ 1.  0.  0.]]

    one_hot([1,0,2,0], 3, on_value=8, off_value=1,
            dtype='int32') = [[1 8 1]
                              [8 1 1]
                              [1 1 8]
                              [8 1 1]]

    one_hot([[1,0],[1,0],[2,0]], 3) = [[[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  1.  0.]
                                        [ 1.  0.  0.]]

                                       [[ 0.  0.  1.]
                                        [ 1.  0.  0.]]]


    Defined in src/operator/tensor/indexing_op.cc:L508

    returns

    org.apache.mxnet.NDArray

  295. abstract def ones_like(args: Any*): NDArrayFuncReturn

    Return an array of ones with the same shape and type
    as the input array.

    Examples::

    x = [[ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

    ones_like(x) = [[ 1.,  1.,  1.],
                    [ 1.,  1.,  1.]]

    Return an array of ones with the same shape and type
    as the input array.

    Examples::

    x = [[ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

    ones_like(x) = [[ 1.,  1.,  1.],
                    [ 1.,  1.,  1.]]

    returns

    org.apache.mxnet.NDArray

  296. abstract def ones_like(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Return an array of ones with the same shape and type
    as the input array.

    Examples::

    x = [[ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

    ones_like(x) = [[ 1.,  1.,  1.],
                    [ 1.,  1.,  1.]]

    Return an array of ones with the same shape and type
    as the input array.

    Examples::

    x = [[ 0.,  0.,  0.],
         [ 0.,  0.,  0.]]

    ones_like(x) = [[ 1.,  1.,  1.],
                    [ 1.,  1.,  1.]]

    returns

    org.apache.mxnet.NDArray

  297. abstract def pad(args: Any*): NDArrayFuncReturn

    Pads an input array with a constant or edge values of the array.

    ..

    Pads an input array with a constant or edge values of the array.

    .. note:: The capitalized alias Pad is deprecated. Use pad instead.

    .. note:: Current implementation only supports 4D and 5D input arrays with padding applied
    only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.

    This operation pads an input array with either a constant_value or edge values
    along each axis of the input array. The amount of padding is specified by pad_width.

    pad_width is a tuple of integer padding widths for each axis of the format
    (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N
    where N is the number of dimensions of the array.

    For dimension N of the input array, before_N and after_N indicate how many values
    to add before and after the elements of the array along dimension N.
    The widths of the higher two dimensions before_1, after_1, before_2,
    after_2 must be 0.

    Example::

    x = [[[[  1.   2.   3.]
           [  4.   5.   6.]]

          [[  7.   8.   9.]
           [ 10.  11.  12.]]]


         [[[ 11.  12.  13.]
           [ 14.  15.  16.]]

          [[ 17.  18.  19.]
           [ 20.  21.  22.]]]]

    pad(x,mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =

          [[[[  1.   1.   2.   3.   3.]
             [  1.   1.   2.   3.   3.]
             [  4.   4.   5.   6.   6.]
             [  4.   4.   5.   6.   6.]]

            [[  7.   7.   8.   9.   9.]
             [  7.   7.   8.   9.   9.]
             [ 10.  10.  11.  12.  12.]
             [ 10.  10.  11.  12.  12.]]]


           [[[ 11.  11.  12.  13.  13.]
             [ 11.  11.  12.  13.  13.]
             [ 14.  14.  15.  16.  16.]
             [ 14.  14.  15.  16.  16.]]

            [[ 17.  17.  18.  19.  19.]
             [ 17.  17.  18.  19.  19.]
             [ 20.  20.  21.  22.  22.]
             [ 20.  20.  21.  22.  22.]]]]

    pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =

          [[[[  0.   0.   0.   0.   0.]
             [  0.   1.   2.   3.   0.]
             [  0.   4.   5.   6.   0.]
             [  0.   0.   0.   0.   0.]]

            [[  0.   0.   0.   0.   0.]
             [  0.   7.   8.   9.   0.]
             [  0.  10.  11.  12.   0.]
             [  0.   0.   0.   0.   0.]]]


           [[[  0.   0.   0.   0.   0.]
             [  0.  11.  12.  13.   0.]
             [  0.  14.  15.  16.   0.]
             [  0.   0.   0.   0.   0.]]

            [[  0.   0.   0.   0.   0.]
             [  0.  17.  18.  19.   0.]
             [  0.  20.  21.  22.   0.]
             [  0.   0.   0.   0.   0.]]]]




    Defined in src/operator/pad.cc:L766

    returns

    org.apache.mxnet.NDArray
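
    A Scala sketch of the constant-padding case, assuming mode, constant_value
    and pad_width are passed through the kwargs overload (using Shape's string
    form for pad_width is an assumption)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 24).map(_.toFloat).toArray, shape = Shape(2, 2, 2, 3))
    // pads only the last two axes by one on each side: result shape (2, 2, 4, 5)
    val padded = NDArray.pad(Map(
      "mode" -> "constant", "constant_value" -> 0,
      "pad_width" -> Shape(0, 0, 0, 0, 1, 1, 1, 1)))(x).head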

  298. abstract def pad(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Pads an input array with a constant or edge values of the array.

    ..

    Pads an input array with a constant or edge values of the array.

    .. note:: The capitalized alias Pad is deprecated. Use pad instead.

    .. note:: Current implementation only supports 4D and 5D input arrays with padding applied
    only on axes 1, 2 and 3. Expects axes 4 and 5 in pad_width to be zero.

    This operation pads an input array with either a constant_value or edge values
    along each axis of the input array. The amount of padding is specified by pad_width.

    pad_width is a tuple of integer padding widths for each axis of the format
    (before_1, after_1, ... , before_N, after_N). The pad_width should be of length 2*N
    where N is the number of dimensions of the array.

    For dimension N of the input array, before_N and after_N indicate how many values
    to add before and after the elements of the array along dimension N.
    The widths of the higher two dimensions before_1, after_1, before_2,
    after_2 must be 0.

    Example::

    x = [[[[  1.   2.   3.]
           [  4.   5.   6.]]

          [[  7.   8.   9.]
           [ 10.  11.  12.]]]


         [[[ 11.  12.  13.]
           [ 14.  15.  16.]]

          [[ 17.  18.  19.]
           [ 20.  21.  22.]]]]

    pad(x,mode="edge", pad_width=(0,0,0,0,1,1,1,1)) =

          [[[[  1.   1.   2.   3.   3.]
             [  1.   1.   2.   3.   3.]
             [  4.   4.   5.   6.   6.]
             [  4.   4.   5.   6.   6.]]

            [[  7.   7.   8.   9.   9.]
             [  7.   7.   8.   9.   9.]
             [ 10.  10.  11.  12.  12.]
             [ 10.  10.  11.  12.  12.]]]


           [[[ 11.  11.  12.  13.  13.]
             [ 11.  11.  12.  13.  13.]
             [ 14.  14.  15.  16.  16.]
             [ 14.  14.  15.  16.  16.]]

            [[ 17.  17.  18.  19.  19.]
             [ 17.  17.  18.  19.  19.]
             [ 20.  20.  21.  22.  22.]
             [ 20.  20.  21.  22.  22.]]]]

    pad(x, mode="constant", constant_value=0, pad_width=(0,0,0,0,1,1,1,1)) =

          [[[[  0.   0.   0.   0.   0.]
             [  0.   1.   2.   3.   0.]
             [  0.   4.   5.   6.   0.]
             [  0.   0.   0.   0.   0.]]

            [[  0.   0.   0.   0.   0.]
             [  0.   7.   8.   9.   0.]
             [  0.  10.  11.  12.   0.]
             [  0.   0.   0.   0.   0.]]]


           [[[  0.   0.   0.   0.   0.]
             [  0.  11.  12.  13.   0.]
             [  0.  14.  15.  16.   0.]
             [  0.   0.   0.   0.   0.]]

            [[  0.   0.   0.   0.   0.]
             [  0.  17.  18.  19.   0.]
             [  0.  20.  21.  22.   0.]
             [  0.   0.   0.   0.   0.]]]]




    Defined in src/operator/pad.cc:L766

    returns

    org.apache.mxnet.NDArray

  299. abstract def pick(args: Any*): NDArrayFuncReturn

    Picks elements from an input array according to the input indices along the given axis.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the clip mode).

    This function supports n-dimensional input and (n-1)-dimensional indices arrays.

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // picks elements with specified indices along axis 0
    pick(x, y=[0,1], 0) = [ 1.,  4.]

    // picks elements with specified indices along axis 1
    pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]

    y = [[ 1.],
         [ 0.],
         [ 2.]]

    // picks elements with specified indices along axis 1 and dims are maintained
    pick(x,y, 1, keepdims=True) = [[ 2.],
                                   [ 3.],
                                   [ 6.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L145

    Picks elements from an input array according to the input indices along the given axis.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the clip mode).

    This function supports n-dimensional input and (n-1)-dimensional indices arrays.

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // picks elements with specified indices along axis 0
    pick(x, y=[0,1], 0) = [ 1.,  4.]

    // picks elements with specified indices along axis 1
    pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]

    y = [[ 1.],
         [ 0.],
         [ 2.]]

    // picks elements with specified indices along axis 1 and dims are maintained
    pick(x,y, 1, keepdims=True) = [[ 2.],
                                   [ 3.],
                                   [ 6.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L145

    returns

    org.apache.mxnet.NDArray
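
    A Scala sketch of the axis-1 example, assuming the index array is given
    positionally and axis is passed through the kwargs overload::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2))
    val idx = NDArray.array(Array(0f, 1f, 0f), shape = Shape(3))
    // one element per row: [1., 4., 5.]
    val picked = NDArray.pick(Map("axis" -> 1))(x, idx).head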

  300. abstract def pick(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Picks elements from an input array according to the input indices along the given axis.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the clip mode).

    This function supports n-dimensional input and (n-1)-dimensional indices arrays.

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // picks elements with specified indices along axis 0
    pick(x, y=[0,1], 0) = [ 1.,  4.]

    // picks elements with specified indices along axis 1
    pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]

    y = [[ 1.],
         [ 0.],
         [ 2.]]

    // picks elements with specified indices along axis 1 and dims are maintained
    pick(x,y, 1, keepdims=True) = [[ 2.],
                                   [ 3.],
                                   [ 6.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L145

    Picks elements from an input array according to the input indices along the given axis.

    Given an input array of shape (d0, d1) and indices of shape (i0,), the result will be
    an output array of shape (i0,) with::

    output[i] = input[i, indices[i]]

    By default, if any index mentioned is too large, it is replaced by the index that addresses
    the last element along an axis (the clip mode).

    This function supports n-dimensional input and (n-1)-dimensional indices arrays.

    Examples::

    x = [[ 1.,  2.],
         [ 3.,  4.],
         [ 5.,  6.]]

    // picks elements with specified indices along axis 0
    pick(x, y=[0,1], 0) = [ 1.,  4.]

    // picks elements with specified indices along axis 1
    pick(x, y=[0,1,0], 1) = [ 1.,  4.,  5.]

    y = [[ 1.],
         [ 0.],
         [ 2.]]

    // picks elements with specified indices along axis 1 and dims are maintained
    pick(x,y, 1, keepdims=True) = [[ 2.],
                                   [ 3.],
                                   [ 6.]]




    Defined in src/operator/tensor/broadcast_reduce_op_index.cc:L145

    returns

    org.apache.mxnet.NDArray

  301. abstract def prod(args: Any*): NDArrayFuncReturn

    Computes the product of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L146

    Computes the product of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L146

    returns

    org.apache.mxnet.NDArray

  302. abstract def prod(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the product of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L146

    Computes the product of array elements over given axes.

    Defined in src/operator/tensor/broadcast_reduce_op_value.cc:L146

    returns

    org.apache.mxnet.NDArray

  303. abstract def radians(args: Any*): NDArrayFuncReturn

    Converts each element of the input array from degrees to radians.

    ..

    Converts each element of the input array from degrees to radians.

    .. math::
    radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]

    The storage type of radians output depends upon the input storage type:

    • radians(default) = default
    • radians(row_sparse) = row_sparse
    • radians(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L182
    returns

    org.apache.mxnet.NDArray

  304. abstract def radians(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Converts each element of the input array from degrees to radians.

    ..

    Converts each element of the input array from degrees to radians.

    .. math::
    radians([0, 90, 180, 270, 360]) = [0, \pi/2, \pi, 3\pi/2, 2\pi]

    The storage type of radians output depends upon the input storage type:

    • radians(default) = default
    • radians(row_sparse) = row_sparse
    • radians(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L182
    returns

    org.apache.mxnet.NDArray

  305. abstract def random_exponential(args: Any*): NDArrayFuncReturn

    Draw random samples from an exponential distribution.

    Samples are distributed according to an exponential distribution parametrized by *lambda* (rate).

    Example::

    exponential(lam=4, shape=(2,2)) = [[ 0.0097189 , 0.08999364],
    [ 0.04146638, 0.31715935]]



    Defined in src/operator/random/sample_op.cc:L115

    returns

    org.apache.mxnet.NDArray

  306. abstract def random_exponential(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from an exponential distribution.

    Samples are distributed according to an exponential distribution parametrized by *lambda* (rate).

    Example::

    exponential(lam=4, shape=(2,2)) = [[ 0.0097189 , 0.08999364],
    [ 0.04146638, 0.31715935]]



    Defined in src/operator/random/sample_op.cc:L115

    returns

    org.apache.mxnet.NDArray
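
    The random_* operators below (gamma, generalized negative binomial,
    negative binomial, normal, Poisson, uniform) all share this calling
    pattern: every distribution parameter is a keyword argument and there are
    no tensor inputs. A hedged sketch, assuming kwargs (including Shape values)
    are stringified for the backend as elsewhere in this package:

    import org.apache.mxnet.{NDArray, Shape}

    val e = NDArray.random_exponential(Map("lam" -> 4f, "shape" -> Shape(2, 2)))().head
    println(e.shape)  // (2,2)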

  307. abstract def random_gamma(args: Any*): NDArrayFuncReturn

    Draw random samples from a gamma distribution.

    Samples are distributed according to a gamma distribution parametrized by *alpha* (shape) and *beta* (scale).

    Example::

    gamma(alpha=9, beta=0.5, shape=(2,2)) = [[ 7.10486984, 3.37695289],
    [ 3.91697288, 3.65933681]]



    Defined in src/operator/random/sample_op.cc:L100

    returns

    org.apache.mxnet.NDArray

  308. abstract def random_gamma(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a gamma distribution.

    Samples are distributed according to a gamma distribution parametrized by *alpha* (shape) and *beta* (scale).

    Example::

    gamma(alpha=9, beta=0.5, shape=(2,2)) = [[ 7.10486984, 3.37695289],
    [ 3.91697288, 3.65933681]]



    Defined in src/operator/random/sample_op.cc:L100

    returns

    org.apache.mxnet.NDArray

  309. abstract def random_generalized_negative_binomial(args: Any*): NDArrayFuncReturn

    Draw random samples from a generalized negative binomial distribution.

    Samples are distributed according to a generalized negative binomial distribution parametrized by
    *mu* (mean) and *alpha* (dispersion). *alpha* is defined as *1/k* where *k* is the failure limit of the
    number of unsuccessful experiments (generalized to real numbers).
    Samples will always be returned as a floating point data type.

    Example::

    generalized_negative_binomial(mu=2.0, alpha=0.3, shape=(2,2)) = [[ 2., 1.],
    [ 6., 4.]]



    Defined in src/operator/random/sample_op.cc:L168

    returns

    org.apache.mxnet.NDArray

  310. abstract def random_generalized_negative_binomial(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a generalized negative binomial distribution.

    Samples are distributed according to a generalized negative binomial distribution parametrized by
    *mu* (mean) and *alpha* (dispersion). *alpha* is defined as *1/k* where *k* is the failure limit of the
    number of unsuccessful experiments (generalized to real numbers).
    Samples will always be returned as a floating point data type.

    Example::

    generalized_negative_binomial(mu=2.0, alpha=0.3, shape=(2,2)) = [[ 2., 1.],
    [ 6., 4.]]



    Defined in src/operator/random/sample_op.cc:L168

    returns

    org.apache.mxnet.NDArray

  311. abstract def random_negative_binomial(args: Any*): NDArrayFuncReturn

    Draw random samples from a negative binomial distribution.

    Samples are distributed according to a negative binomial distribution parametrized by
    *k* (limit of unsuccessful experiments) and *p* (failure probability in each experiment).
    Samples will always be returned as a floating point data type.

    Example::

    negative_binomial(k=3, p=0.4, shape=(2,2)) = [[ 4., 7.],
    [ 2., 5.]]



    Defined in src/operator/random/sample_op.cc:L149

    returns

    org.apache.mxnet.NDArray

  312. abstract def random_negative_binomial(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a negative binomial distribution.

    Samples are distributed according to a negative binomial distribution parametrized by
    *k* (limit of unsuccessful experiments) and *p* (failure probability in each experiment).
    Samples will always be returned as a floating point data type.

    Example::

    negative_binomial(k=3, p=0.4, shape=(2,2)) = [[ 4., 7.],
    [ 2., 5.]]



    Defined in src/operator/random/sample_op.cc:L149

    returns

    org.apache.mxnet.NDArray

  313. abstract def random_normal(args: Any*): NDArrayFuncReturn

    Draw random samples from a normal (Gaussian) distribution.

    .. note:: The existing alias normal is deprecated.

    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale* (standard deviation).

    Example::

    normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478],
    [-1.23474145, 1.55807114]]



    Defined in src/operator/random/sample_op.cc:L85

    returns

    org.apache.mxnet.NDArray

  314. abstract def random_normal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a normal (Gaussian) distribution.

    .. note:: The existing alias normal is deprecated.

    Samples are distributed according to a normal distribution parametrized by *loc* (mean) and *scale* (standard deviation).

    Example::

    normal(loc=0, scale=1, shape=(2,2)) = [[ 1.89171135, -1.16881478],
    [-1.23474145, 1.55807114]]



    Defined in src/operator/random/sample_op.cc:L85

    returns

    org.apache.mxnet.NDArray

  315. abstract def random_poisson(args: Any*): NDArrayFuncReturn

    Draw random samples from a Poisson distribution.

    Samples are distributed according to a Poisson distribution parametrized by *lambda* (rate).
    Samples will always be returned as a floating point data type.

    Example::

    poisson(lam=4, shape=(2,2)) = [[ 5., 2.],
    [ 4., 6.]]



    Defined in src/operator/random/sample_op.cc:L132

    returns

    org.apache.mxnet.NDArray

  316. abstract def random_poisson(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a Poisson distribution.

    Samples are distributed according to a Poisson distribution parametrized by *lambda* (rate).
    Samples will always be returned as a floating point data type.

    Example::

    poisson(lam=4, shape=(2,2)) = [[ 5., 2.],
    [ 4., 6.]]



    Defined in src/operator/random/sample_op.cc:L132

    returns

    org.apache.mxnet.NDArray

  317. abstract def random_uniform(args: Any*): NDArrayFuncReturn

    Draw random samples from a uniform distribution.

    .. note:: The existing alias uniform is deprecated.

    Samples are uniformly distributed over the half-open interval *[low, high)*
    (includes *low*, but excludes *high*).

    Example::

    uniform(low=0, high=1, shape=(2,2)) = [[ 0.60276335, 0.85794562],
    [ 0.54488319, 0.84725171]]




    Defined in src/operator/random/sample_op.cc:L66

    returns

    org.apache.mxnet.NDArray

  318. abstract def random_uniform(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Draw random samples from a uniform distribution.

    .. note:: The existing alias uniform is deprecated.

    Samples are uniformly distributed over the half-open interval *[low, high)*
    (includes *low*, but excludes *high*).

    Example::

    uniform(low=0, high=1, shape=(2,2)) = [[ 0.60276335, 0.85794562],
    [ 0.54488319, 0.84725171]]




    Defined in src/operator/random/sample_op.cc:L66

    returns

    org.apache.mxnet.NDArray

  319. abstract def ravel_multi_index(args: Any*): NDArrayFuncReturn

    Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions so a single multi index is given by a column of the input matrix.

    Examples::

    A = [[3,6,6],[4,5,1]]
    ravel(A, shape=(7,6)) = [22,41,37]



    Defined in src/operator/tensor/ravel.cc:L41

    returns

    org.apache.mxnet.NDArray

  320. abstract def ravel_multi_index(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Converts a batch of index arrays into an array of flat indices. The operator follows numpy conventions so a single multi index is given by a column of the input matrix.

    Examples::

    A = [[3,6,6],[4,5,1]]
    ravel(A, shape=(7,6)) = [22,41,37]



    Defined in src/operator/tensor/ravel.cc:L41

    returns

    org.apache.mxnet.NDArray
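
    A sketch under the same assumptions as the earlier examples:

    import org.apache.mxnet.{NDArray, Shape}

    // each column is one multi-index: (3,4), (6,5), (6,1)
    val a = NDArray.array(Array(3f, 6f, 6f, 4f, 5f, 1f), shape = Shape(2, 3))
    val flat = NDArray.ravel_multi_index(Map("shape" -> Shape(7, 6)))(a).head
    // expected [22, 41, 37], since flat = row * 6 + col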

  321. abstract def rcbrt(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse cube-root value of the input.

    .. math::
    rcbrt(x) = 1/\sqrt[3]{x}

    Example::

    rcbrt([1,8,-125]) = [1.0, 0.5, -0.2]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L816

    returns

    org.apache.mxnet.NDArray

  322. abstract def rcbrt(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse cube-root value of the input.

    .. math::
    rcbrt(x) = 1/\sqrt[3]{x}

    Example::

    rcbrt([1,8,-125]) = [1.0, 0.5, -0.2]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L816

    returns

    org.apache.mxnet.NDArray

  323. abstract def reciprocal(args: Any*): NDArrayFuncReturn

    Returns the reciprocal of the argument, element-wise.

    Calculates 1/x.

    Example::

    reciprocal([-2, 1, 3, 1.6, 0.2]) = [-0.5, 1.0, 0.33333334, 0.625, 5.0]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L556


    returns

    org.apache.mxnet.NDArray

  324. abstract def reciprocal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the reciprocal of the argument, element-wise.

    Calculates 1/x.

    Example::

    reciprocal([-2, 1, 3, 1.6, 0.2]) = [-0.5, 1.0, 0.33333334, 0.625, 5.0]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L556


    returns

    org.apache.mxnet.NDArray

  325. abstract def relu(args: Any*): NDArrayFuncReturn

    Computes rectified linear activation.

    .. math::
    max(features, 0)

    The storage type of relu output depends upon the input storage type:

    • relu(default) = default
    • relu(row_sparse) = row_sparse
    • relu(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85
    returns

    org.apache.mxnet.NDArray

  326. abstract def relu(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes rectified linear activation.

    .. math::
    max(features, 0)

    The storage type of relu output depends upon the input storage type:

    • relu(default) = default
    • relu(row_sparse) = row_sparse
    • relu(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L85
    returns

    org.apache.mxnet.NDArray
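
    A minimal sketch, under the same assumed conventions:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(-2f, -0.5f, 0f, 3f), shape = Shape(4))
    val y = NDArray.relu(x).head  // [0, 0, 0, 3]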

  327. abstract def repeat(args: Any*): NDArrayFuncReturn

    Repeats elements of an array.

    By default, repeat flattens the input array into 1-D and then repeats the
    elements::

    x = [[ 1, 2],
    [ 3, 4]]

    repeat(x, repeats=2) = [ 1., 1., 2., 2., 3., 3., 4., 4.]

    The parameter axis specifies the axis along which to perform repeat::

    repeat(x, repeats=2, axis=1) = [[ 1., 1., 2., 2.],
    [ 3., 3., 4., 4.]]

    repeat(x, repeats=2, axis=0) = [[ 1., 2.],
    [ 1., 2.],
    [ 3., 4.],
    [ 3., 4.]]

    repeat(x, repeats=2, axis=-1) = [[ 1., 1., 2., 2.],
    [ 3., 3., 4., 4.]]




    Defined in src/operator/tensor/matrix_op.cc:L690

    returns

    org.apache.mxnet.NDArray

  328. abstract def repeat(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Repeats elements of an array.

    By default, repeat flattens the input array into 1-D and then repeats the
    elements::

    x = [[ 1, 2],
    [ 3, 4]]

    repeat(x, repeats=2) = [ 1., 1., 2., 2., 3., 3., 4., 4.]

    The parameter axis specifies the axis along which to perform repeat::

    repeat(x, repeats=2, axis=1) = [[ 1., 1., 2., 2.],
    [ 3., 3., 4., 4.]]

    repeat(x, repeats=2, axis=0) = [[ 1., 2.],
    [ 1., 2.],
    [ 3., 4.],
    [ 3., 4.]]

    repeat(x, repeats=2, axis=-1) = [[ 1., 1., 2., 2.],
    [ 3., 3., 4., 4.]]




    Defined in src/operator/tensor/matrix_op.cc:L690

    returns

    org.apache.mxnet.NDArray
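
    A sketch of the two modes described above (flattened vs. per-axis), under
    the same assumptions as the earlier examples:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f), shape = Shape(2, 2))

    // flattened repeat -> [1, 1, 2, 2, 3, 3, 4, 4]
    val r0 = NDArray.repeat(Map("repeats" -> 2))(x).head
    // along axis 1 -> shape (2, 4)
    val r1 = NDArray.repeat(Map("repeats" -> 2, "axis" -> 1))(x).head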

  329. abstract def reshape(args: Any*): NDArrayFuncReturn

    Reshapes the input array.

    .. note:: The capitalized alias Reshape is deprecated; use reshape instead.

    Given an array and a shape, this function returns a copy of the array in the new shape.
    The shape is a tuple of integers such as (2,3,4). The size of the new shape should be the same as the size of the input array.

    Example::

    reshape([1,2,3,4], shape=(2,2)) = [[1,2], [3,4]]

    Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

    - 0 copy this dimension from the input to the output shape.

    Example::

    • input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
    • input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)

      - -1 infers the dimension of the output shape by using the remainder of the input dimensions
      keeping the size of the new array same as that of the input array.
      At most one dimension of shape can be -1.

      Example::

    • input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
    • input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
    • input shape = (2,3,4), shape=(-1,), output shape = (24,)

      - -2 copy all/remainder of the input dimensions to the output shape.

      Example::

    • input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)

      - -3 use the product of two consecutive dimensions of the input shape as the output dimension.

      Example::

    • input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
    • input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
    • input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
    • input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)

      - -4 split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).

      Example::

    • input shape = (2,3,4), shape = (-4,1,2,-2), output shape =(1,2,3,4)
    • input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)

      If the argument reverse is set to 1, then the special values are inferred from right to left.

      Example::

    • without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
    • with reverse=1, output shape will be (50,4).



      Defined in src/operator/tensor/matrix_op.cc:L168
    returns

    org.apache.mxnet.NDArray

  330. abstract def reshape(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reshapes the input array.

    .. note:: The capitalized alias Reshape is deprecated; use reshape instead.

    Given an array and a shape, this function returns a copy of the array in the new shape.
    The shape is a tuple of integers such as (2,3,4). The size of the new shape should be the same as the size of the input array.

    Example::

    reshape([1,2,3,4], shape=(2,2)) = [[1,2], [3,4]]

    Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}. The significance of each is explained below:

    - 0 copy this dimension from the input to the output shape.

    Example::

    • input shape = (2,3,4), shape = (4,0,2), output shape = (4,3,2)
    • input shape = (2,3,4), shape = (2,0,0), output shape = (2,3,4)

      - -1 infers the dimension of the output shape by using the remainder of the input dimensions
      keeping the size of the new array same as that of the input array.
      At most one dimension of shape can be -1.

      Example::

    • input shape = (2,3,4), shape = (6,1,-1), output shape = (6,1,4)
    • input shape = (2,3,4), shape = (3,-1,8), output shape = (3,1,8)
    • input shape = (2,3,4), shape=(-1,), output shape = (24,)

      - -2 copy all/remainder of the input dimensions to the output shape.

      Example::

    • input shape = (2,3,4), shape = (-2,), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (2,-2), output shape = (2,3,4)
    • input shape = (2,3,4), shape = (-2,1,1), output shape = (2,3,4,1,1)

      - -3 use the product of two consecutive dimensions of the input shape as the output dimension.

      Example::

    • input shape = (2,3,4), shape = (-3,4), output shape = (6,4)
    • input shape = (2,3,4,5), shape = (-3,-3), output shape = (6,20)
    • input shape = (2,3,4), shape = (0,-3), output shape = (2,12)
    • input shape = (2,3,4), shape = (-3,-2), output shape = (6,4)

      - -4 split one dimension of the input into two dimensions passed subsequent to -4 in shape (can contain -1).

      Example::

    • input shape = (2,3,4), shape = (-4,1,2,-2), output shape =(1,2,3,4)
    • input shape = (2,3,4), shape = (2,-4,-1,3,-2), output shape = (2,1,3,4)

      If the argument reverse is set to 1, then the special values are inferred from right to left.

      Example::

    • without reverse=1, for input shape = (10,5,4), shape = (-1,0), output shape would be (40,5)
    • with reverse=1, output shape will be (50,4).



      Defined in src/operator/tensor/matrix_op.cc:L168
    returns

    org.apache.mxnet.NDArray
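
    A sketch exercising the special values 0 and -1 described above. Passing
    the shape tuple as a string is one plausible encoding for the generic
    kwargs Map; it is an assumption, not a documented contract:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array.tabulate(24)(_.toFloat), shape = Shape(2, 3, 4))

    // 0 copies a dimension, -1 infers the rest: (2,3,4) -> (2,12)
    val flat = NDArray.reshape(Map("shape" -> "(0, -1)"))(x).head
    val back = NDArray.reshape(Map("shape" -> "(2, 3, 4)"))(flat).head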

  331. abstract def reshape_like(args: Any*): NDArrayFuncReturn

    Reshape lhs to have the same shape as rhs.


    returns

    org.apache.mxnet.NDArray

  332. abstract def reshape_like(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reshape lhs to have the same shape as rhs.


    returns

    org.apache.mxnet.NDArray

  333. abstract def reverse(args: Any*): NDArrayFuncReturn

    Reverses the order of elements along given axis while preserving array shape.

    Note: reverse and flip are equivalent. We use reverse in the following examples.

    Examples::

    x = [[ 0., 1., 2., 3., 4.],
    [ 5., 6., 7., 8., 9.]]

    reverse(x, axis=0) = [[ 5., 6., 7., 8., 9.],
    [ 0., 1., 2., 3., 4.]]

    reverse(x, axis=1) = [[ 4., 3., 2., 1., 0.],
    [ 9., 8., 7., 6., 5.]]



    Defined in src/operator/tensor/matrix_op.cc:L792

    returns

    org.apache.mxnet.NDArray

  334. abstract def reverse(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Reverses the order of elements along given axis while preserving array shape.

    Note: reverse and flip are equivalent. We use reverse in the following examples.

    Examples::

    x = [[ 0., 1., 2., 3., 4.],
    [ 5., 6., 7., 8., 9.]]

    reverse(x, axis=0) = [[ 5., 6., 7., 8., 9.],
    [ 0., 1., 2., 3., 4.]]

    reverse(x, axis=1) = [[ 4., 3., 2., 1., 0.],
    [ 9., 8., 7., 6., 5.]]



    Defined in src/operator/tensor/matrix_op.cc:L792

    returns

    org.apache.mxnet.NDArray
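
    A minimal sketch, under the same assumed conventions:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array.tabulate(10)(_.toFloat), shape = Shape(2, 5))
    // reverse each row: [[4,3,2,1,0], [9,8,7,6,5]]
    val flipped = NDArray.reverse(Map("axis" -> 1))(x).head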

  335. abstract def rint(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer of the input.

    .. note::

    • For input n.5 rint returns n while round returns n+1.
    • For input -n.5 both rint and round return -n-1.

      Example::

      rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 1., -2., 2., 2.]

      The storage type of rint output depends upon the input storage type:

    • rint(default) = default
    • rint(row_sparse) = row_sparse
    • rint(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L637
    returns

    org.apache.mxnet.NDArray

  336. abstract def rint(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer of the input.

    .. note::

    • For input n.5 rint returns n while round returns n+1.
    • For input -n.5 both rint and round return -n-1.

      Example::

      rint([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 1., -2., 2., 2.]

      The storage type of rint output depends upon the input storage type:

    • rint(default) = default
    • rint(row_sparse) = row_sparse
    • rint(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L637
    returns

    org.apache.mxnet.NDArray
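
    A sketch contrasting rint with round on the half-integer case noted above:

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(-1.5f, 1.5f, -1.9f, 1.9f, 2.1f), shape = Shape(5))
    val a = NDArray.rint(x).head   // [-2, 1, -2, 2, 2]
    val b = NDArray.round(x).head  // [-2, 2, -2, 2, 2] -- differs at 1.5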

  337. abstract def rmsprop_update(args: Any*): NDArrayFuncReturn

    Update function for RMSProp optimizer.

    RMSprop is a variant of stochastic gradient descent where the gradients are
    divided by a cache which grows with the sum of squares of recent gradients.

    RMSProp is similar to AdaGrad, a popular variant of SGD which adaptively
    tunes the learning rate of each parameter. AdaGrad lowers the learning rate for
    each parameter monotonically over the course of training.
    While this is analytically motivated for convex optimizations, it may not be ideal
    for non-convex problems. RMSProp deals with this heuristically by allowing the
    learning rates to rebound as the denominator decays over time.

    Define the Root Mean Square (RMS) error criterion of the gradient as
    :math:RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}, where :math:g represents
    gradient and :math:E[g^2]_t is the decaying average over past squared gradient.

    The :math:E[g^2]_t is given by:

    .. math::
    E[g^2]_t = \gamma * E[g^2]_{t-1} + (1-\gamma) * g_t^2

    The update step is

    .. math::
    \theta_{t+1} = \theta_t - \frac{\eta}{RMS[g]_t} g_t

    The RMSProp code follows the version in
    http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
    Tieleman & Hinton, 2012.

    Hinton suggests the momentum term :math:\gamma to be 0.9 and the learning rate
    :math:\eta to be 0.001.



    Defined in src/operator/optimizer_op.cc:L553

    returns

    org.apache.mxnet.NDArray

  338. abstract def rmsprop_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for RMSProp optimizer.

    RMSprop is a variant of stochastic gradient descent where the gradients are
    divided by a cache which grows with the sum of squares of recent gradients.

    RMSProp is similar to AdaGrad, a popular variant of SGD which adaptively
    tunes the learning rate of each parameter. AdaGrad lowers the learning rate for
    each parameter monotonically over the course of training.
    While this is analytically motivated for convex optimizations, it may not be ideal
    for non-convex problems. RMSProp deals with this heuristically by allowing the
    learning rates to rebound as the denominator decays over time.

    Define the Root Mean Square (RMS) error criterion of the gradient as
    :math:RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}, where :math:g represents
    gradient and :math:E[g^2]_t is the decaying average over past squared gradient.

    The :math:E[g^2]_t is given by:

    .. math::
    E[g^2]_t = \gamma * E[g^2]_{t-1} + (1-\gamma) * g_t^2

    The update step is

    .. math::
    \theta_{t+1} = \theta_t - \frac{\eta}{RMS[g]_t} g_t

    The RMSProp code follows the version in
    http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
    Tieleman & Hinton, 2012.

    Hinton suggests the momentum term :math:\gamma to be 0.9 and the learning rate
    :math:\eta to be 0.001.



    Defined in src/operator/optimizer_op.cc:L553

    returns

    org.apache.mxnet.NDArray
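
    A hedged sketch of one optimizer step. The argument order (weight, grad,
    running E[g^2] state) and the parameter names lr and gamma1 are assumptions
    drawn from the formulas above, not verified signatures:

    import org.apache.mxnet.{NDArray, Shape}

    val weight = NDArray.ones(Shape(2, 2))
    val grad = NDArray.ones(Shape(2, 2)) * 0.1f
    val n = NDArray.zeros(Shape(2, 2))  // running E[g^2] cache

    // lr is the learning rate eta, gamma1 the decay rate gamma (assumed names)
    NDArray.rmsprop_update(Map("lr" -> 0.001f, "gamma1" -> 0.9f))(weight, grad, n)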

  339. abstract def rmspropalex_update(args: Any*): NDArrayFuncReturn

    Update function for RMSPropAlex optimizer.

    RMSPropAlex is a non-centered version of RMSProp.

    Define :math:E[g^2]_t as the decaying average over past squared gradient and
    :math:E[g]_t as the decaying average over past gradient.

    .. math::
    E[g^2]_t = \gamma_1 * E[g^2]_{t-1} + (1 - \gamma_1) * g_t^2\\
    E[g]_t = \gamma_1 * E[g]_{t-1} + (1 - \gamma_1) * g_t\\
    \Delta_t = \gamma_2 * \Delta_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t - E[g]_t^2 + \epsilon}} g_t\\

    The update step is

    .. math::
    \theta_{t+1} = \theta_t + \Delta_t

    The RMSPropAlex code follows the version in
    http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.

    Graves suggests the momentum term :math:\gamma_1 to be 0.95, :math:\gamma_2
    to be 0.9 and the learning rate :math:\eta to be 0.0001.


    Defined in src/operator/optimizer_op.cc:L592

    returns

    org.apache.mxnet.NDArray

  340. abstract def rmspropalex_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for RMSPropAlex optimizer.

    RMSPropAlex is a non-centered version of RMSProp.

    Define :math:E[g^2]_t as the decaying average over past squared gradient and
    :math:E[g]_t as the decaying average over past gradient.

    .. math::
    E[g^2]_t = \gamma_1 * E[g^2]_{t-1} + (1 - \gamma_1) * g_t^2\\
    E[g]_t = \gamma_1 * E[g]_{t-1} + (1 - \gamma_1) * g_t\\
    \Delta_t = \gamma_2 * \Delta_{t-1} - \frac{\eta}{\sqrt{E[g^2]_t - E[g]_t^2 + \epsilon}} g_t\\

    The update step is

    .. math::
    \theta_{t+1} = \theta_t + \Delta_t

    The RMSPropAlex code follows the version in
    http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013.

    Graves suggests the momentum term :math:\gamma_1 to be 0.95, :math:\gamma_2
    to be 0.9 and the learning rate :math:\eta to be 0.0001.


    Defined in src/operator/optimizer_op.cc:L592

    returns

    org.apache.mxnet.NDArray

  341. abstract def round(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer of the input.

    Example::

    round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 2., -2., 2., 2.]

    The storage type of round output depends upon the input storage type:

    • round(default) = default
    • round(row_sparse) = row_sparse
    • round(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L616
    returns

    org.apache.mxnet.NDArray

  342. abstract def round(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise rounded value to the nearest integer of the input.

    Example::

    round([-1.5, 1.5, -1.9, 1.9, 2.1]) = [-2., 2., -2., 2., 2.]

    The storage type of round output depends upon the input storage type:

    • round(default) = default
    • round(row_sparse) = row_sparse
    • round(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L616
    returns

    org.apache.mxnet.NDArray

  343. abstract def rsqrt(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse square-root value of the input.

    .. math::
    rsqrt(x) = 1/\sqrt{x}

    Example::

    rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]

    The storage type of rsqrt output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L776

    returns

    org.apache.mxnet.NDArray

  344. abstract def rsqrt(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise inverse square-root value of the input.

    .. math::
    rsqrt(x) = 1/\sqrt{x}

    Example::

    rsqrt([4,9,16]) = [0.5, 0.33333334, 0.25]

    The storage type of rsqrt output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L776

    returns

    org.apache.mxnet.NDArray

  345. abstract def sample_exponential(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    exponential distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.

    Examples::

    lam = [ 1.0, 8.5 ]

    // Draw a single sample for each distribution
    sample_exponential(lam) = [ 0.51837951, 0.09994757]

    // Draw a vector containing two samples for each distribution
    sample_exponential(lam, shape=(2)) = [[ 0.51837951, 0.19866663],
    [ 0.09994757, 0.50447971]]



    Defined in src/operator/random/multisample_op.cc:L284

    returns

    org.apache.mxnet.NDArray

  346. abstract def sample_exponential(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    exponential distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.

    Examples::

    lam = [ 1.0, 8.5 ]

    // Draw a single sample for each distribution
    sample_exponential(lam) = [ 0.51837951, 0.09994757]

    // Draw a vector containing two samples for each distribution
    sample_exponential(lam, shape=(2)) = [[ 0.51837951, 0.19866663],
    [ 0.09994757, 0.50447971]]



    Defined in src/operator/random/multisample_op.cc:L284

    returns

    org.apache.mxnet.NDArray
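
    Unlike the random_* operators, the sample_* operators take one input array
    per distribution parameter and draw per-element. A hedged sketch under the
    same assumed conventions as the earlier examples; the sample_* operators
    below follow the same pattern:

    import org.apache.mxnet.{NDArray, Shape}

    val lam = NDArray.array(Array(1.0f, 8.5f), shape = Shape(2))

    // two draws per rate parameter -> output shape (2, 2)
    val s = NDArray.sample_exponential(Map("shape" -> "(2)"))(lam).head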

  347. abstract def sample_gamma(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    gamma distributions with parameters *alpha* (shape) and *beta* (scale).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    alpha = [ 0.0, 2.5 ]
    beta = [ 1.0, 0.7 ]

    // Draw a single sample for each distribution
    sample_gamma(alpha, beta) = [ 0. , 2.25797319]

    // Draw a vector containing two samples for each distribution
    sample_gamma(alpha, beta, shape=(2)) = [[ 0. , 0. ],
    [ 2.25797319, 1.70734084]]



    Defined in src/operator/random/multisample_op.cc:L282

    returns

    org.apache.mxnet.NDArray

  348. abstract def sample_gamma(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    gamma distributions with parameters *alpha* (shape) and *beta* (scale).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    alpha = [ 0.0, 2.5 ]
    beta = [ 1.0, 0.7 ]

    // Draw a single sample for each distribution
    sample_gamma(alpha, beta) = [ 0. , 2.25797319]

    // Draw a vector containing two samples for each distribution
    sample_gamma(alpha, beta, shape=(2)) = [[ 0. , 0. ],
    [ 2.25797319, 1.70734084]]



    Defined in src/operator/random/multisample_op.cc:L282

    returns

    org.apache.mxnet.NDArray

  349. abstract def sample_generalized_negative_binomial(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    generalized negative binomial distributions with parameters *mu* (mean) and *alpha* (dispersion).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Samples will always be returned as a floating point data type.

    Examples::

    mu = [ 2.0, 2.5 ]
    alpha = [ 1.0, 0.1 ]

    // Draw a single sample for each distribution
    sample_generalized_negative_binomial(mu, alpha) = [ 0., 3.]

    // Draw a vector containing two samples for each distribution
    sample_generalized_negative_binomial(mu, alpha, shape=(2)) = [[ 0., 3.],
    [ 3., 1.]]



    Defined in src/operator/random/multisample_op.cc:L293

    returns

    org.apache.mxnet.NDArray

  350. abstract def sample_generalized_negative_binomial(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    generalized negative binomial distributions with parameters *mu* (mean) and *alpha* (dispersion).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Samples will always be returned as a floating point data type.

    Examples::

    mu = [ 2.0, 2.5 ]
    alpha = [ 1.0, 0.1 ]

    // Draw a single sample for each distribution
    sample_generalized_negative_binomial(mu, alpha) = [ 0., 3.]

    // Draw a vector containing two samples for each distribution
    sample_generalized_negative_binomial(mu, alpha, shape=(2)) = [[ 0., 3.],
    [ 3., 1.]]



    Defined in src/operator/random/multisample_op.cc:L293

    returns

    org.apache.mxnet.NDArray

  351. abstract def sample_multinomial(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple multinomial distributions.

    *data* is an *n* dimensional array whose last dimension has length *k*, where
    *k* is the number of possible outcomes of each multinomial distribution. This
    operator will draw *shape* samples from each distribution. If shape is empty,
    one sample will be drawn from each distribution.

    If *get_prob* is true, a second array containing log likelihood of the drawn
    samples will also be returned. This is usually used for reinforcement learning
    where you can provide reward as head gradient for this array to estimate
    gradient.

    Note that the input distribution must be normalized, i.e. *data* must sum to
    1 along its last axis.

    Examples::

    probs = [[ 0, 0.1, 0.2, 0.3, 0.4], [ 0.4, 0.3, 0.2, 0.1, 0]]

    // Draw a single sample for each distribution
    sample_multinomial(probs) = [3, 0]

    // Draw a vector containing two samples for each distribution
    sample_multinomial(probs, shape=(2)) = [[ 4, 2],
    [ 0, 0]]


    // requests log likelihood
    sample_multinomial(probs, get_prob=True) = [2, 1], [0.2, 0.3]

    returns

    org.apache.mxnet.NDArray

  352. abstract def sample_multinomial(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple multinomial distributions.

    *data* is an *n* dimensional array whose last dimension has length *k*, where
    *k* is the number of possible outcomes of each multinomial distribution. This
    operator will draw *shape* samples from each distribution. If shape is empty,
    one sample will be drawn from each distribution.

    If *get_prob* is true, a second array containing log likelihood of the drawn
    samples will also be returned. This is usually used for reinforcement learning
    where you can provide reward as head gradient for this array to estimate
    gradient.

    Note that the input distribution must be normalized, i.e. *data* must sum to
    1 along its last axis.

    Examples::

    probs = [[ 0, 0.1, 0.2, 0.3, 0.4], [ 0.4, 0.3, 0.2, 0.1, 0]]

    // Draw a single sample for each distribution
    sample_multinomial(probs) = [3, 0]

    // Draw a vector containing two samples for each distribution
    sample_multinomial(probs, shape=(2)) = [[ 4, 2],
    [ 0, 0]]


    // requests log likelihood
    sample_multinomial(probs, get_prob=True) = [2, 1], [0.2, 0.3]

    returns

    org.apache.mxnet.NDArray
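
    A hedged sketch; note that each row of probs must sum to 1:

    import org.apache.mxnet.{NDArray, Shape}

    // two distributions over k = 5 outcomes
    val probs = NDArray.array(
      Array(0f, 0.1f, 0.2f, 0.3f, 0.4f,
            0.4f, 0.3f, 0.2f, 0.1f, 0f),
      shape = Shape(2, 5))
    val draws = NDArray.sample_multinomial(Map("shape" -> "(2)"))(probs).head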

  353. abstract def sample_negative_binomial(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    negative binomial distributions with parameters *k* (failure limit) and *p* (failure probability).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Samples will always be returned as a floating point data type.

    Examples::

    k = [ 20, 49 ]
    p = [ 0.4 , 0.77 ]

    // Draw a single sample for each distribution
    sample_negative_binomial(k, p) = [ 15., 16.]

    // Draw a vector containing two samples for each distribution
    sample_negative_binomial(k, p, shape=(2)) = [[ 15., 50.],
    [ 16., 12.]]



    Defined in src/operator/random/multisample_op.cc:L289

    returns

    org.apache.mxnet.NDArray

  354. abstract def sample_negative_binomial(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    negative binomial distributions with parameters *k* (failure limit) and *p* (failure probability).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Samples will always be returned as a floating point data type.

    Examples::

    k = [ 20, 49 ]
    p = [ 0.4 , 0.77 ]

    // Draw a single sample for each distribution
    sample_negative_binomial(k, p) = [ 15., 16.]

    // Draw a vector containing two samples for each distribution
    sample_negative_binomial(k, p, shape=(2)) = [[ 15., 50.],
    [ 16., 12.]]



    Defined in src/operator/random/multisample_op.cc:L289

    returns

    org.apache.mxnet.NDArray

  355. abstract def sample_normal(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    normal distributions with parameters *mu* (mean) and *sigma* (standard deviation).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    mu = [ 0.0, 2.5 ]
    sigma = [ 1.0, 3.7 ]

    // Draw a single sample for each distribution
    sample_normal(mu, sigma) = [-0.56410581, 0.95934606]

    // Draw a vector containing two samples for each distribution
    sample_normal(mu, sigma, shape=(2)) = [[-0.56410581, 0.2928229 ],
    [ 0.95934606, 4.48287058]]



    Defined in src/operator/random/multisample_op.cc:L279

    returns

    org.apache.mxnet.NDArray

  356. abstract def sample_normal(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    normal distributions with parameters *mu* (mean) and *sigma* (standard deviation).

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be an *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    mu = [ 0.0, 2.5 ]
    sigma = [ 1.0, 3.7 ]

    // Draw a single sample for each distribution
    sample_normal(mu, sigma) = [-0.56410581, 0.95934606]

    // Draw a vector containing two samples for each distribution
    sample_normal(mu, sigma, shape=(2)) = [[-0.56410581, 0.2928229 ],
    [ 0.95934606, 4.48287058]]



    Defined in src/operator/random/multisample_op.cc:L279

    returns

    org.apache.mxnet.NDArray

  357. abstract def sample_poisson(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    Poisson distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*.

    Concurrent sampling from multiple
    Poisson distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.

    Samples will always be returned as a floating point data type.

    Examples::

    lam = [ 1.0, 8.5 ]

    // Draw a single sample for each distribution
    sample_poisson(lam) = [ 0., 13.]

    // Draw a vector containing two samples for each distribution
    sample_poisson(lam, shape=(2)) = [[  0.,  4.],
                                      [ 13.,  8.]]



    Defined in src/operator/random/multisample_op.cc:L286

    returns

    org.apache.mxnet.NDArray
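
    The Poisson variant follows the same calling pattern; a sketch (illustrative values, same calling-convention assumptions as above)::

    import org.apache.mxnet.{NDArray, Shape}

    val lam = NDArray.array(Array(1.0f, 8.5f), Shape(2))

    // One sample per distribution; output shape matches the input shape (2).
    // The counts come back as floating point values.
    val single = NDArray.sample_poisson(lam).head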

  358. abstract def sample_poisson(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    Poisson distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*.

    Concurrent sampling from multiple
    Poisson distributions with parameters lambda (rate).

    The parameters of the distributions are provided as an input array.
    Let *[s]* be the shape of the input array, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input array, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input value at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input array.

    Samples will always be returned as a floating point data type.

    Examples::

    lam = [ 1.0, 8.5 ]

    // Draw a single sample for each distribution
    sample_poisson(lam) = [ 0., 13.]

    // Draw a vector containing two samples for each distribution
    sample_poisson(lam, shape=(2)) = [[  0.,  4.],
                                      [ 13.,  8.]]



    Defined in src/operator/random/multisample_op.cc:L286

    returns

    org.apache.mxnet.NDArray

  359. abstract def sample_uniform(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    uniform distributions on the intervals given by *[low,high)*.

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*.

    Concurrent sampling from multiple
    uniform distributions on the intervals given by *[low,high)*.

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    low = [ 0.0, 2.5 ]
    high = [ 1.0, 3.7 ]

    // Draw a single sample for each distribution
    sample_uniform(low, high) = [ 0.40451524, 3.18687344]

    // Draw a vector containing two samples for each distribution
    sample_uniform(low, high, shape=(2)) = [[ 0.40451524, 0.18017688],
                                            [ 3.18687344, 3.68352246]]



    Defined in src/operator/random/multisample_op.cc:L277

    returns

    org.apache.mxnet.NDArray
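
    A sketch for the uniform case (illustrative values; same calling-convention assumptions)::

    import org.apache.mxnet.{NDArray, Shape}

    val low  = NDArray.array(Array(0.0f, 2.5f), Shape(2))
    val high = NDArray.array(Array(1.0f, 3.7f), Shape(2))

    // Element i is drawn from the half-open interval [low(i), high(i))
    val u = NDArray.sample_uniform(low, high).head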

  360. abstract def sample_uniform(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Concurrent sampling from multiple
    uniform distributions on the intervals given by *[low,high)*.

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*.

    Concurrent sampling from multiple
    uniform distributions on the intervals given by *[low,high)*.

    The parameters of the distributions are provided as input arrays.
    Let *[s]* be the shape of the input arrays, *n* be the dimension of *[s]*, *[t]*
    be the shape specified as the parameter of the operator, and *m* be the dimension
    of *[t]*. Then the output will be a *(n+m)*-dimensional array with shape *[s]x[t]*.

    For any valid *n*-dimensional index *i* with respect to the input arrays, *output[i]*
    will be an *m*-dimensional array that holds randomly drawn samples from the distribution
    which is parameterized by the input values at index *i*. If the shape parameter of the
    operator is not set, then one sample will be drawn per distribution and the output array
    has the same shape as the input arrays.

    Examples::

    low = [ 0.0, 2.5 ]
    high = [ 1.0, 3.7 ]

    // Draw a single sample for each distribution
    sample_uniform(low, high) = [ 0.40451524, 3.18687344]

    // Draw a vector containing two samples for each distribution
    sample_uniform(low, high, shape=(2)) = [[ 0.40451524, 0.18017688],
                                            [ 3.18687344, 3.68352246]]



    Defined in src/operator/random/multisample_op.cc:L277

    returns

    org.apache.mxnet.NDArray

  361. abstract def scatter_nd(args: Any*): NDArrayFuncReturn

    Scatters data into a new tensor according to indices.

    Given data with shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (X_0, X_1, ..., X_{N-1}),
    where M <= N.

    Scatters data into a new tensor according to indices.

    Given data with shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (X_0, X_1, ..., X_{N-1}),
    where M <= N. If M == N, data shape should simply be (Y_0, ..., Y_{K-1}).

    The elements in output are defined as follows::

    output[indices[0, y_0, ..., y_{K-1}],
    ...,
    indices[M-1, y_0, ..., y_{K-1}],
    x_M, ..., x_{N-1}] = data[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}]

    all other entries in output are 0.

    .. warning::

    If the indices have duplicates, the result will be non-deterministic and
    the gradient of scatter_nd will not be correct!!


    Examples::

    data = [2, 3, 0]
    indices = [[1, 1, 0], [0, 1, 0]]
    shape = (2, 2)
    scatter_nd(data, indices, shape) = [[0, 0], [2, 3]]

    returns

    org.apache.mxnet.NDArray
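
    A sketch of the example above via the Scala API (passing the target shape as the string "(2,2)" and supplying the indices as a float NDArray are assumptions)::

    import org.apache.mxnet.{NDArray, Shape}

    val data    = NDArray.array(Array(2f, 3f, 0f), Shape(3))
    // indices = [[1, 1, 0], [0, 1, 0]]
    val indices = NDArray.array(Array(1f, 1f, 0f, 0f, 1f, 0f), Shape(2, 3))

    // out = [[0, 0], [2, 3]]
    val out = NDArray.scatter_nd(Map("shape" -> "(2,2)"))(data, indices).head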

  362. abstract def scatter_nd(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Scatters data into a new tensor according to indices.

    Given data with shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (X_0, X_1, ..., X_{N-1}),
    where M <= N.

    Scatters data into a new tensor according to indices.

    Given data with shape (Y_0, ..., Y_{K-1}, X_M, ..., X_{N-1}) and indices with shape
    (M, Y_0, ..., Y_{K-1}), the output will have shape (X_0, X_1, ..., X_{N-1}),
    where M <= N. If M == N, data shape should simply be (Y_0, ..., Y_{K-1}).

    The elements in output are defined as follows::

    output[indices[0, y_0, ..., y_{K-1}],
    ...,
    indices[M-1, y_0, ..., y_{K-1}],
    x_M, ..., x_{N-1}] = data[y_0, ..., y_{K-1}, x_M, ..., x_{N-1}]

    all other entries in output are 0.

    .. warning::

    If the indices have duplicates, the result will be non-deterministic and
    the gradient of scatter_nd will not be correct!!


    Examples::

    data = [2, 3, 0]
    indices = [[1, 1, 0], [0, 1, 0]]
    shape = (2, 2)
    scatter_nd(data, indices, shape) = [[0, 0], [2, 3]]

    returns

    org.apache.mxnet.NDArray

  363. abstract def sgd_mom_update(args: Any*): NDArrayFuncReturn

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.

    Momentum update has better convergence rates on neural networks.

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.

    Momentum update has better convergence rates on neural networks. Mathematically it looks
    like below:

    .. math::

    v_1 = \alpha * \nabla J(W_0)\\
    v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
    W_t = W_{t-1} + v_t

    It updates the weights using::

    v = momentum * v - learning_rate * gradient
    weight += v

    Where the parameter momentum is the decay rate of momentum estimates at each epoch.

    However, if grad's storage type is row_sparse, lazy_update is True and weight's storage
    type is the same as momentum's storage type,
    only the row slices whose indices appear in grad.indices are updated (for both weight and momentum)::

    for row in gradient.indices:
    v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]



    Defined in src/operator/optimizer_op.cc:L372

    returns

    org.apache.mxnet.NDArray
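
    A sketch of one momentum step (the lr and momentum parameter names and the weight/grad/mom positional order follow the update rule above; values illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val weight = NDArray.ones(Shape(2, 2))
    val grad   = NDArray.ones(Shape(2, 2))
    val mom    = NDArray.zeros(Shape(2, 2))

    // v = momentum * v - lr * grad; weight += v
    val updated = NDArray.sgd_mom_update(
      Map("lr" -> 0.1f, "momentum" -> 0.9f))(weight, grad, mom).head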

  364. abstract def sgd_mom_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.

    Momentum update has better convergence rates on neural networks.

    Momentum update function for Stochastic Gradient Descent (SGD) optimizer.

    Momentum update has better convergence rates on neural networks. Mathematically it looks
    like below:

    .. math::

    v_1 = \alpha * \nabla J(W_0)\\
    v_t = \gamma v_{t-1} - \alpha * \nabla J(W_{t-1})\\
    W_t = W_{t-1} + v_t

    It updates the weights using::

    v = momentum * v - learning_rate * gradient
    weight += v

    Where the parameter momentum is the decay rate of momentum estimates at each epoch.

    However, if grad's storage type is row_sparse, lazy_update is True and weight's storage
    type is the same as momentum's storage type,
    only the row slices whose indices appear in grad.indices are updated (for both weight and momentum)::

    for row in gradient.indices:
    v[row] = momentum[row] * v[row] - learning_rate * gradient[row]
    weight[row] += v[row]



    Defined in src/operator/optimizer_op.cc:L372

    returns

    org.apache.mxnet.NDArray

  365. abstract def sgd_update(args: Any*): NDArrayFuncReturn

    Update function for Stochastic Gradient Descent (SGD) optimizer.

    It updates the weights using::

    weight = weight - learning_rate * (gradient + wd * weight)

    However, if gradient is of row_sparse storage type and lazy_update is True,
    only the row slices whose indices appear in grad.indices are updated::

    for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])



    Defined in src/operator/optimizer_op.cc:L331

    Update function for Stochastic Gradient Descent (SGD) optimizer.

    It updates the weights using::

    weight = weight - learning_rate * (gradient + wd * weight)

    However, if gradient is of row_sparse storage type and lazy_update is True,
    only the row slices whose indices appear in grad.indices are updated::

    for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])



    Defined in src/operator/optimizer_op.cc:L331

    returns

    org.apache.mxnet.NDArray
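
    The plain SGD step as a sketch (assuming the usual lr parameter, with wd left at its default; values illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val weight = NDArray.ones(Shape(2, 2))
    val grad   = NDArray.ones(Shape(2, 2))

    // weight = weight - lr * (grad + wd * weight)
    val updated = NDArray.sgd_update(Map("lr" -> 0.1f))(weight, grad).head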

  366. abstract def sgd_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for Stochastic Gradient Descent (SGD) optimizer.

    It updates the weights using::

    weight = weight - learning_rate * (gradient + wd * weight)

    However, if gradient is of row_sparse storage type and lazy_update is True,
    only the row slices whose indices appear in grad.indices are updated::

    for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])



    Defined in src/operator/optimizer_op.cc:L331

    Update function for Stochastic Gradient Descent (SGD) optimizer.

    It updates the weights using::

    weight = weight - learning_rate * (gradient + wd * weight)

    However, if gradient is of row_sparse storage type and lazy_update is True,
    only the row slices whose indices appear in grad.indices are updated::

    for row in gradient.indices:
    weight[row] = weight[row] - learning_rate * (gradient[row] + wd * weight[row])



    Defined in src/operator/optimizer_op.cc:L331

    returns

    org.apache.mxnet.NDArray

  367. abstract def shape_array(args: Any*): NDArrayFuncReturn

    Returns a 1D int64 array containing the shape of data.

    Example::

    shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L416

    Returns a 1D int64 array containing the shape of data.

    Example::

    shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L416

    returns

    org.apache.mxnet.NDArray
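
    A sketch showing that the result is itself a 1D NDArray holding the shape (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 4))
    val s = NDArray.shape_array(x).head  // 1D int64 NDArray holding [2, 4]
    println(s.shape)                     // (2)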

  368. abstract def shape_array(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns a 1D int64 array containing the shape of data.

    Example::

    shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L416

    Returns a 1D int64 array containing the shape of data.

    Example::

    shape_array([[1,2,3,4], [5,6,7,8]]) = [2,4]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L416

    returns

    org.apache.mxnet.NDArray

  369. abstract def shuffle(args: Any*): NDArrayFuncReturn

    Randomly shuffle the elements.

    This shuffles the array along the first axis.
    The order of the elements in each subarray does not change.
    For example, if a 2D array is given, the order of the rows randomly changes,
    but the order of the elements in each row does not change.

    Randomly shuffle the elements.

    This shuffles the array along the first axis.
    The order of the elements in each subarray does not change.
    For example, if a 2D array is given, the order of the rows randomly changes,
    but the order of the elements in each row does not change.

    returns

    org.apache.mxnet.NDArray
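
    A sketch: shuffling a 2D array permutes whole rows while leaving each row's contents intact (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    // Rows [1,2], [3,4], [5,6] reappear in some random order
    val x = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), Shape(3, 2))
    val shuffled = NDArray.shuffle(x).head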

  370. abstract def shuffle(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Randomly shuffle the elements.

    This shuffles the array along the first axis.
    The order of the elements in each subarray does not change.
    For example, if a 2D array is given, the order of the rows randomly changes,
    but the order of the elements in each row does not change.

    Randomly shuffle the elements.

    This shuffles the array along the first axis.
    The order of the elements in each subarray does not change.
    For example, if a 2D array is given, the order of the rows randomly changes,
    but the order of the elements in each row does not change.

    returns

    org.apache.mxnet.NDArray

  371. abstract def sigmoid(args: Any*): NDArrayFuncReturn

    Computes sigmoid of x element-wise.

    ..

    Computes sigmoid of x element-wise.

    .. math::
    y = 1 / (1 + exp(-x))

    The storage type of sigmoid output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L101

    returns

    org.apache.mxnet.NDArray
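
    The element-wise operators in this family are all invoked the same way; a sigmoid sketch (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(-1f, 0f, 1f), Shape(3))
    val y = NDArray.sigmoid(x).head  // approximately [0.269, 0.5, 0.731]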

  372. abstract def sigmoid(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes sigmoid of x element-wise.

    ..

    Computes sigmoid of x element-wise.

    .. math::
    y = 1 / (1 + exp(-x))

    The storage type of sigmoid output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L101

    returns

    org.apache.mxnet.NDArray

  373. abstract def sign(args: Any*): NDArrayFuncReturn

    Returns element-wise sign of the input.

    Example::

    sign([-2, 0, 3]) = [-1, 0, 1]

    The storage type of sign output depends upon the input storage type:

    Returns element-wise sign of the input.

    Example::

    sign([-2, 0, 3]) = [-1, 0, 1]

    The storage type of sign output depends upon the input storage type:

    • sign(default) = default
    • sign(row_sparse) = row_sparse
    • sign(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L597
    returns

    org.apache.mxnet.NDArray

  374. abstract def sign(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns element-wise sign of the input.

    Example::

    sign([-2, 0, 3]) = [-1, 0, 1]

    The storage type of sign output depends upon the input storage type:

    Returns element-wise sign of the input.

    Example::

    sign([-2, 0, 3]) = [-1, 0, 1]

    The storage type of sign output depends upon the input storage type:

    • sign(default) = default
    • sign(row_sparse) = row_sparse
    • sign(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L597
    returns

    org.apache.mxnet.NDArray

  375. abstract def signsgd_update(args: Any*): NDArrayFuncReturn

    Update function for SignSGD optimizer.

    ..

    Update function for SignSGD optimizer.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    W_t = W_{t-1} - \eta_t \text{sign}(g_t)

    It updates the weights using::

    weight = weight - learning_rate * sign(gradient)

    .. note::

    • sparse ndarray not supported for this optimizer yet.


      Defined in src/operator/optimizer_op.cc:L57
    returns

    org.apache.mxnet.NDArray

  376. abstract def signsgd_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Update function for SignSGD optimizer.

    ..

    Update function for SignSGD optimizer.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    W_t = W_{t-1} - \eta_t \text{sign}(g_t)

    It updates the weights using::

    weight = weight - learning_rate * sign(gradient)

    .. note::

    • sparse ndarray not supported for this optimizer yet.


      Defined in src/operator/optimizer_op.cc:L57
    returns

    org.apache.mxnet.NDArray

  377. abstract def signum_update(args: Any*): NDArrayFuncReturn

    SIGN momentUM (Signum) optimizer.

    ..

    SIGN momentUM (Signum) optimizer.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    m_t = \beta m_{t-1} + (1 - \beta) g_t\\
    W_t = W_{t-1} - \eta_t \text{sign}(m_t)

    It updates the weights using::
    state = momentum * state + (1-momentum) * gradient
    weight = weight - learning_rate * sign(state)

    Where the parameter momentum is the decay rate of momentum estimates at each epoch.

    .. note::

    • sparse ndarray not supported for this optimizer yet.


      Defined in src/operator/optimizer_op.cc:L86
    returns

    org.apache.mxnet.NDArray

  378. abstract def signum_update(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    SIGN momentUM (Signum) optimizer.

    ..

    SIGN momentUM (Signum) optimizer.

    .. math::

    g_t = \nabla J(W_{t-1})\\
    m_t = \beta m_{t-1} + (1 - \beta) g_t\\
    W_t = W_{t-1} - \eta_t \text{sign}(m_t)

    It updates the weights using::
    state = momentum * state + (1-momentum) * gradient
    weight = weight - learning_rate * sign(state)

    Where the parameter momentum is the decay rate of momentum estimates at each epoch.

    .. note::

    • sparse ndarray not supported for this optimizer yet.


      Defined in src/operator/optimizer_op.cc:L86
    returns

    org.apache.mxnet.NDArray

  379. abstract def sin(args: Any*): NDArrayFuncReturn

    Computes the element-wise sine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    ..

    Computes the element-wise sine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    .. math::
    sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]

    The storage type of sin output depends upon the input storage type:

    • sin(default) = default
    • sin(row_sparse) = row_sparse
    • sin(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L46
    returns

    org.apache.mxnet.NDArray

  380. abstract def sin(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes the element-wise sine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    ..

    Computes the element-wise sine of the input array.

    The input should be in radians (:math:2\pi rad equals 360 degrees).

    .. math::
    sin([0, \pi/4, \pi/2]) = [0, 0.707, 1]

    The storage type of sin output depends upon the input storage type:

    • sin(default) = default
    • sin(row_sparse) = row_sparse
    • sin(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L46
    returns

    org.apache.mxnet.NDArray

  381. abstract def sinh(args: Any*): NDArrayFuncReturn

    Returns the hyperbolic sine of the input array, computed element-wise.

    ..

    Returns the hyperbolic sine of the input array, computed element-wise.

    .. math::
    sinh(x) = 0.5\times(exp(x) - exp(-x))

    The storage type of sinh output depends upon the input storage type:

    • sinh(default) = default
    • sinh(row_sparse) = row_sparse
    • sinh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L201
    returns

    org.apache.mxnet.NDArray

  382. abstract def sinh(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns the hyperbolic sine of the input array, computed element-wise.

    ..

    Returns the hyperbolic sine of the input array, computed element-wise.

    .. math::
    sinh(x) = 0.5\times(exp(x) - exp(-x))

    The storage type of sinh output depends upon the input storage type:

    • sinh(default) = default
    • sinh(row_sparse) = row_sparse
    • sinh(csr) = csr



      Defined in src/operator/tensor/elemwise_unary_op_trig.cc:L201
    returns

    org.apache.mxnet.NDArray

  383. abstract def size_array(args: Any*): NDArrayFuncReturn

    Returns a 1D int64 array containing the size of data.

    Example::

    size_array([[1,2,3,4], [5,6,7,8]]) = [8]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L466

    Returns a 1D int64 array containing the size of data.

    Example::

    size_array([[1,2,3,4], [5,6,7,8]]) = [8]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L466

    returns

    org.apache.mxnet.NDArray

  384. abstract def size_array(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns a 1D int64 array containing the size of data.

    Example::

    size_array([[1,2,3,4], [5,6,7,8]]) = [8]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L466

    Returns a 1D int64 array containing the size of data.

    Example::

    size_array([[1,2,3,4], [5,6,7,8]]) = [8]



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L466

    returns

    org.apache.mxnet.NDArray

  385. abstract def slice(args: Any*): NDArrayFuncReturn

    Slices a region of the array.

    ..

    Slices a region of the array.

    .. note:: crop is deprecated. Use slice instead.

    This function returns a sliced array between the indices given
    by begin and end with the corresponding step.

    For an input array of shape=(d_0, d_1, ..., d_n-1),
    slice operation with begin=(b_0, b_1...b_m-1),
    end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1),
    where m <= n, results in an array with the shape
    (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

    The resulting array's *k*-th dimension contains elements
    from the *k*-th dimension of the input array starting
    from index b_k (inclusive) with step s_k
    until reaching e_k (exclusive).

    If the *k*-th elements are None in the sequence of begin, end,
    and step, the following rule will be used to set default values.
    If s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k;
    else, set b_k=d_k-1, e_k=-1.

    The storage type of slice output depends on storage types of inputs

    - slice(csr) = csr
    - otherwise, slice generates output with default storage

    .. note:: When input data storage type is csr, it only supports
    step=(), or step=(None,), or step=(1,) to generate a csr output.
    For other step parameter values, it falls back to slicing
    a dense tensor.

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                        [ 6.,  7.,  8.]]

    slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[ 9., 11.],
                                                              [ 5.,  7.],
                                                              [ 1.,  3.]]



    Defined in src/operator/tensor/matrix_op.cc:L412

    returns

    org.apache.mxnet.NDArray
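
    A sketch of the first example above (passing the begin/end tuples as strings is an assumption about how kwargs are serialized)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 12).map(_.toFloat).toArray, Shape(3, 4))
    // [[2., 3., 4.], [6., 7., 8.]]
    val out = NDArray.slice(Map("begin" -> "(0,1)", "end" -> "(2,4)"))(x).head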

  386. abstract def slice(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Slices a region of the array.

    ..

    Slices a region of the array.

    .. note:: crop is deprecated. Use slice instead.

    This function returns a sliced array between the indices given
    by begin and end with the corresponding step.

    For an input array of shape=(d_0, d_1, ..., d_n-1),
    slice operation with begin=(b_0, b_1...b_m-1),
    end=(e_0, e_1, ..., e_m-1), and step=(s_0, s_1, ..., s_m-1),
    where m <= n, results in an array with the shape
    (|e_0-b_0|/|s_0|, ..., |e_m-1-b_m-1|/|s_m-1|, d_m, ..., d_n-1).

    The resulting array's *k*-th dimension contains elements
    from the *k*-th dimension of the input array starting
    from index b_k (inclusive) with step s_k
    until reaching e_k (exclusive).

    If the *k*-th elements are None in the sequence of begin, end,
    and step, the following rule will be used to set default values.
    If s_k is None, set s_k=1. If s_k > 0, set b_k=0, e_k=d_k;
    else, set b_k=d_k-1, e_k=-1.

    The storage type of slice output depends on storage types of inputs

    - slice(csr) = csr
    - otherwise, slice generates output with default storage

    .. note:: When input data storage type is csr, it only supports
    step=(), or step=(None,), or step=(1,) to generate a csr output.
    For other step parameter values, it falls back to slicing
    a dense tensor.

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice(x, begin=(0,1), end=(2,4)) = [[ 2.,  3.,  4.],
                                        [ 6.,  7.,  8.]]

    slice(x, begin=(None, 0), end=(None, 3), step=(-1, 2)) = [[ 9., 11.],
                                                              [ 5.,  7.],
                                                              [ 1.,  3.]]



    Defined in src/operator/tensor/matrix_op.cc:L412

    returns

    org.apache.mxnet.NDArray

  387. abstract def slice_axis(args: Any*): NDArrayFuncReturn

    Slices along a given axis.

    Returns an array slice along a given axis starting from the begin index
    to the end index.

    Examples::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice_axis(x, axis=0, begin=1, end=3) = [[  5.,  6.,  7.,  8.],
                                             [  9., 10., 11., 12.]]

    slice_axis(x, axis=1, begin=0, end=2) = [[  1.,  2.],
                                             [  5.,  6.],
                                             [  9., 10.]]

    slice_axis(x, axis=1, begin=-3, end=-1) = [[  2.,  3.],
                                               [  6.,  7.],
                                               [ 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L499

    Slices along a given axis.

    Returns an array slice along a given axis starting from the begin index
    to the end index.

    Examples::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice_axis(x, axis=0, begin=1, end=3) = [[  5.,  6.,  7.,  8.],
                                             [  9., 10., 11., 12.]]

    slice_axis(x, axis=1, begin=0, end=2) = [[  1.,  2.],
                                             [  5.,  6.],
                                             [  9., 10.]]

    slice_axis(x, axis=1, begin=-3, end=-1) = [[  2.,  3.],
                                               [  6.,  7.],
                                               [ 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L499

    returns

    org.apache.mxnet.NDArray
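
    A sketch of the axis=0 example (illustrative; same kwargs assumptions as for slice)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 12).map(_.toFloat).toArray, Shape(3, 4))
    // Rows 1 and 2: [[5., 6., 7., 8.], [9., 10., 11., 12.]]
    val out = NDArray.slice_axis(
      Map("axis" -> 0, "begin" -> 1, "end" -> 3))(x).head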

  388. abstract def slice_axis(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Slices along a given axis.

    Returns an array slice along a given axis starting from the begin index
    to the end index.

    Examples::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice_axis(x, axis=0, begin=1, end=3) = [[  5.,  6.,  7.,  8.],
                                             [  9., 10., 11., 12.]]

    slice_axis(x, axis=1, begin=0, end=2) = [[  1.,  2.],
                                             [  5.,  6.],
                                             [  9., 10.]]

    slice_axis(x, axis=1, begin=-3, end=-1) = [[  2.,  3.],
                                               [  6.,  7.],
                                               [ 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L499

    Slices along a given axis.

    Returns an array slice along a given axis starting from the begin index
    to the end index.

    Examples::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    slice_axis(x, axis=0, begin=1, end=3) = [[  5.,  6.,  7.,  8.],
                                             [  9., 10., 11., 12.]]

    slice_axis(x, axis=1, begin=0, end=2) = [[  1.,  2.],
                                             [  5.,  6.],
                                             [  9., 10.]]

    slice_axis(x, axis=1, begin=-3, end=-1) = [[  2.,  3.],
                                               [  6.,  7.],
                                               [ 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L499

    returns

    org.apache.mxnet.NDArray

  389. abstract def slice_like(args: Any*): NDArrayFuncReturn

    Slices a region of the array like the shape of another array.

    This function is similar to slice, however, the begin are always 0s
    and end of specific axes are inferred from the second input shape_like.

    Given the second shape_like input of shape=(d_0, d_1, ..., d_n-1),
    a slice_like operator with default empty axes, it performs the
    following operation:

    out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1)).

    When axes is not empty, it is used to specify which axes are being sliced.

    Given a 4-d input data, slice_like operator with axes=(0, 2, -1)
    will perform the following operation:

    out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3)).

    Note that the first and second inputs are allowed to have different numbers
    of dimensions; however, you have to make sure that the specified axes do not
    exceed the dimension limits.

    For example, given input_1 with shape=(2,3,4,5) and input_2 with
    shape=(1,2,3), it is not allowed to use:

    out = slice_like(a, b) because ndim of input_1 is 4, and ndim of input_2
    is 3.

    The following is allowed in this situation:

    out = slice_like(a, b, axes=(0, 2))

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    y = [[  0.,  0.,  0.],
         [  0.,  0.,  0.]]

    slice_like(x, y) = [[ 1., 2., 3.],
                        [ 5., 6., 7.]]

    slice_like(x, y, axes=(0, 1)) = [[ 1., 2., 3.],
                                     [ 5., 6., 7.]]

    slice_like(x, y, axes=(0)) = [[ 1., 2., 3., 4.],
                                  [ 5., 6., 7., 8.]]

    slice_like(x, y, axes=(-1)) = [[  1.,  2.,  3.],
                                   [  5.,  6.,  7.],
                                   [  9., 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L568

    Slices a region of the array like the shape of another array.

    This function is similar to slice, however, the begin are always 0s
    and end of specific axes are inferred from the second input shape_like.

    Given the second shape_like input of shape=(d_0, d_1, ..., d_n-1),
    a slice_like operator with default empty axes, it performs the
    following operation:

    out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1)).

    When axes is not empty, it is used to specify which axes are being sliced.

    Given a 4-d input data, slice_like operator with axes=(0, 2, -1)
    will perform the following operation:

    out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3)).

    Note that the first and second inputs are allowed to have different numbers
    of dimensions; however, you have to make sure that the specified axes do not
    exceed the dimension limits.

    For example, given input_1 with shape=(2,3,4,5) and input_2 with
    shape=(1,2,3), it is not allowed to use:

    out = slice_like(a, b) because ndim of input_1 is 4, and ndim of input_2
    is 3.

    The following is allowed in this situation:

    out = slice_like(a, b, axes=(0, 2))

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    y = [[  0.,  0.,  0.],
         [  0.,  0.,  0.]]

    slice_like(x, y) = [[ 1., 2., 3.],
                        [ 5., 6., 7.]]

    slice_like(x, y, axes=(0, 1)) = [[ 1., 2., 3.],
                                     [ 5., 6., 7.]]

    slice_like(x, y, axes=(0)) = [[ 1., 2., 3., 4.],
                                  [ 5., 6., 7., 8.]]

    slice_like(x, y, axes=(-1)) = [[  1.,  2.,  3.],
                                   [  5.,  6.,  7.],
                                   [  9., 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L568

    returns

    org.apache.mxnet.NDArray
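
    A sketch of the two-input form, cropping x to y's shape (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 12).map(_.toFloat).toArray, Shape(3, 4))
    val y = NDArray.zeros(Shape(2, 3))

    // [[1., 2., 3.], [5., 6., 7.]]
    val out = NDArray.slice_like(x, y).head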

  390. abstract def slice_like(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Slices a region of the array like the shape of another array.

    This function is similar to slice, however, the begin are always 0s
    and end of specific axes are inferred from the second input shape_like.

    Given the second shape_like input of shape=(d_0, d_1, ..., d_n-1),
    a slice_like operator with default empty axes, it performs the
    following operation:

    out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1)).

    When axes is not empty, it is used to specify which axes are being sliced.

    Given a 4-d input data, slice_like operator with axes=(0, 2, -1)
    will perform the following operation:

    out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3)).

    Note that the first and second inputs are allowed to have different numbers
    of dimensions; however, you have to make sure that the specified axes do not
    exceed the dimension limits.

    For example, given input_1 with shape=(2,3,4,5) and input_2 with
    shape=(1,2,3), it is not allowed to use:

    out = slice_like(a, b) because ndim of input_1 is 4, and ndim of input_2
    is 3.

    The following is allowed in this situation:

    out = slice_like(a, b, axes=(0, 2))

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    y = [[  0.,  0.,  0.],
         [  0.,  0.,  0.]]

    slice_like(x, y) = [[ 1., 2., 3.],
                        [ 5., 6., 7.]]

    slice_like(x, y, axes=(0, 1)) = [[ 1., 2., 3.],
                                     [ 5., 6., 7.]]

    slice_like(x, y, axes=(0)) = [[ 1., 2., 3., 4.],
                                  [ 5., 6., 7., 8.]]

    slice_like(x, y, axes=(-1)) = [[  1.,  2.,  3.],
                                   [  5.,  6.,  7.],
                                   [  9., 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L568

    Slices a region of the array like the shape of another array.

    This function is similar to slice, however, the begin are always 0s
    and end of specific axes are inferred from the second input shape_like.

    Given the second shape_like input of shape=(d_0, d_1, ..., d_n-1),
    a slice_like operator with default empty axes, it performs the
    following operation:

    out = slice(input, begin=(0, 0, ..., 0), end=(d_0, d_1, ..., d_n-1)).

    When axes is not empty, it is used to specify which axes are being sliced.

    Given a 4-d input data, slice_like operator with axes=(0, 2, -1)
    will perform the following operation:

    out = slice(input, begin=(0, 0, 0, 0), end=(d_0, None, d_2, d_3)).

    Note that the first and second inputs are allowed to have different numbers
    of dimensions; however, you have to make sure that the specified axes do not
    exceed the dimension limits.

    For example, given input_1 with shape=(2,3,4,5) and input_2 with
    shape=(1,2,3), it is not allowed to use:

    out = slice_like(a, b) because ndim of input_1 is 4, and ndim of input_2
    is 3.

    The following is allowed in this situation:

    out = slice_like(a, b, axes=(0, 2))

    Example::

    x = [[  1.,  2.,  3.,  4.],
         [  5.,  6.,  7.,  8.],
         [  9., 10., 11., 12.]]

    y = [[  0.,  0.,  0.],
         [  0.,  0.,  0.]]

    slice_like(x, y) = [[ 1., 2., 3.],
                        [ 5., 6., 7.]]

    slice_like(x, y, axes=(0, 1)) = [[ 1., 2., 3.],
                                     [ 5., 6., 7.]]

    slice_like(x, y, axes=(0)) = [[ 1., 2., 3., 4.],
                                  [ 5., 6., 7., 8.]]

    slice_like(x, y, axes=(-1)) = [[  1.,  2.,  3.],
                                   [  5.,  6.,  7.],
                                   [  9., 10., 11.]]



    Defined in src/operator/tensor/matrix_op.cc:L568

    returns

    org.apache.mxnet.NDArray

  391. abstract def smooth_l1(args: Any*): NDArrayFuncReturn

    Calculate Smooth L1 Loss(lhs, scalar) by summing

    ..

    Calculate Smooth L1 Loss(lhs, scalar) by summing

    .. math::

    f(x) =
    \begin{cases}
    (\sigma x)^2/2,& \text{if }x < 1/\sigma^2\\
    |x|-0.5/\sigma^2,& \text{otherwise}
    \end{cases}

    where :math:x is an element of the tensor *lhs* and :math:\sigma is the scalar.

    Example::

    smooth_l1([1, 2, 3, 4], scalar=1) = [0.5, 1.5, 2.5, 3.5]



    Defined in src/operator/tensor/elemwise_binary_scalar_op_extended.cc:L103

    returns

    org.apache.mxnet.NDArray
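
    A sketch matching the example above (the parameter name scalar is taken from the example and is an assumption about the generated signature)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 2f, 3f, 4f), Shape(4))
    // [0.5, 1.5, 2.5, 3.5]
    val loss = NDArray.smooth_l1(Map("scalar" -> 1.0f))(x).head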

  392. abstract def smooth_l1(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Calculate Smooth L1 Loss(lhs, scalar) by summing

    ..

    Calculate Smooth L1 Loss(lhs, scalar) by summing

    .. math::

    f(x) =
    \begin{cases}
    (\sigma x)^2/2,& \text{if }x < 1/\sigma^2\\
    |x|-0.5/\sigma^2,& \text{otherwise}
    \end{cases}

    where :math:x is an element of the tensor *lhs* and :math:\sigma is the scalar.

    Example::

    smooth_l1([1, 2, 3, 4], scalar=1) = [0.5, 1.5, 2.5, 3.5]



    Defined in src/operator/tensor/elemwise_binary_scalar_op_extended.cc:L103

    returns

    org.apache.mxnet.NDArray

  393. abstract def softmax(args: Any*): NDArrayFuncReturn

    Applies the softmax function.

    The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

    ..

    Applies the softmax function.

    The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

    .. math::
    softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}

    for :math:j = 1, ..., K

    t is the temperature parameter in the softmax function; by default, t equals 1.0.

    Example::

    x = [[ 1.  1.  1.]
         [ 1.  1.  1.]]

    softmax(x, axis=0) = [[ 0.5  0.5  0.5]
                          [ 0.5  0.5  0.5]]

    softmax(x, axis=1) = [[ 0.33333334, 0.33333334, 0.33333334],
                          [ 0.33333334, 0.33333334, 0.33333334]]




    Defined in src/operator/nn/softmax.cc:L98

    returns

    org.apache.mxnet.NDArray
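
    A sketch of the axis=1 example (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.ones(Shape(2, 3))
    // Each row sums to 1: [[0.333..., 0.333..., 0.333...], ...]
    val p = NDArray.softmax(Map("axis" -> 1))(x).head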

  394. abstract def softmax(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Applies the softmax function.

    The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

    ..

    Applies the softmax function.

    The resulting array contains elements in the range (0,1) and the elements along the given axis sum up to 1.

    .. math::
    softmax(\mathbf{z/t})_j = \frac{e^{z_j/t}}{\sum_{k=1}^K e^{z_k/t}}

    for :math:j = 1, ..., K

    t is the temperature parameter in the softmax function; by default, t equals 1.0.

    Example::

    x = [[ 1.  1.  1.]
         [ 1.  1.  1.]]

    softmax(x, axis=0) = [[ 0.5  0.5  0.5]
                          [ 0.5  0.5  0.5]]

    softmax(x, axis=1) = [[ 0.33333334, 0.33333334, 0.33333334],
                          [ 0.33333334, 0.33333334, 0.33333334]]




    Defined in src/operator/nn/softmax.cc:L98

    returns

    org.apache.mxnet.NDArray

  395. abstract def softmax_cross_entropy(args: Any*): NDArrayFuncReturn

    Calculate cross entropy of softmax output and one-hot label.

    - This operator computes the cross entropy in two steps:

    Calculate cross entropy of softmax output and one-hot label.

    - This operator computes the cross entropy in two steps:

    • Applies softmax function on the input array.
    • Computes and returns the cross entropy loss between the softmax output and the labels.

      - The softmax function and cross entropy loss is given by:

    • Softmax Function:

      .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}

    • Cross Entropy Function:

      .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)

      Example::

      x = [[ 1,  2,  3],
           [11,  7,  5]]

      label = [2, 0]

      softmax(x) = [[0.09003057, 0.24472848, 0.66524094],
                    [0.97962922, 0.01794253, 0.00242826]]

      softmax_cross_entropy(data, label) = - log(0.66524094) - log(0.97962922) = 0.4281871



      Defined in src/operator/loss_binary_op.cc:L59
    returns

    org.apache.mxnet.NDArray
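
    A sketch of the example above (illustrative values)::

    import org.apache.mxnet.{NDArray, Shape}

    val data  = NDArray.array(Array(1f, 2f, 3f, 11f, 7f, 5f), Shape(2, 3))
    val label = NDArray.array(Array(2f, 0f), Shape(2))

    // Cross entropy summed over the batch, approximately 0.4281871
    val ce = NDArray.softmax_cross_entropy(data, label).head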

  396. abstract def softmax_cross_entropy(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Calculate cross entropy of softmax output and one-hot label.

    - This operator computes the cross entropy in two steps:

    Calculate cross entropy of softmax output and one-hot label.

    - This operator computes the cross entropy in two steps:

    • Applies softmax function on the input array.
    • Computes and returns the cross entropy loss between the softmax output and the labels.

      - The softmax function and cross entropy loss is given by:

    • Softmax Function:

      .. math:: \text{softmax}(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}

    • Cross Entropy Function:

      .. math:: \text{CE(label, output)} = - \sum_i \text{label}_i \log(\text{output}_i)

      Example::

      x = [[ 1,  2,  3],
           [11,  7,  5]]

      label = [2, 0]

      softmax(x) = [[0.09003057, 0.24472848, 0.66524094],
                    [0.97962922, 0.01794253, 0.00242826]]

      softmax_cross_entropy(data, label) = - log(0.66524094) - log(0.97962922) = 0.4281871



      Defined in src/operator/loss_binary_op.cc:L59
    returns

    org.apache.mxnet.NDArray

  397. abstract def softsign(args: Any*): NDArrayFuncReturn

    Computes softsign of x element-wise.

    ..

    Computes softsign of x element-wise.

    .. math::
    y = x / (1 + abs(x))

    The storage type of softsign output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L145

    returns

    org.apache.mxnet.NDArray

  398. abstract def softsign(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Computes softsign of x element-wise.

    ..

    Computes softsign of x element-wise.

    .. math::
    y = x / (1 + abs(x))

    The storage type of softsign output is always dense



    Defined in src/operator/tensor/elemwise_unary_op_basic.cc:L145

    returns

    org.apache.mxnet.NDArray

  399. abstract def sort(args: Any*): NDArrayFuncReturn

    Returns a sorted copy of an input array along the given axis.

    Examples::

    x = [[ 1, 4],
         [ 3, 1]]

    // sorts along the last axis
    sort(x) = [[ 1.,  4.],
               [ 1.,  3.]]

    // flattens and then sorts
    sort(x, axis=None) = [ 1.,  1.,  3.,  4.]

    // sorts along the first axis
    sort(x, axis=0) = [[ 1.,  1.],
                       [ 3.,  4.]]

    // in descending order
    sort(x, is_ascend=0) = [[ 4.,  1.],
                            [ 3.,  1.]]




    Defined in src/operator/tensor/ordering_op.cc:L126

    Returns a sorted copy of an input array along the given axis.

    Examples::

    x = [[ 1, 4],
         [ 3, 1]]

    // sorts along the last axis
    sort(x) = [[ 1.,  4.],
               [ 1.,  3.]]

    // flattens and then sorts
    sort(x, axis=None) = [ 1.,  1.,  3.,  4.]

    // sorts along the first axis
    sort(x, axis=0) = [[ 1.,  1.],
                       [ 3.,  4.]]

    // in descending order
    sort(x, is_ascend=0) = [[ 4.,  1.],
                            [ 3.,  1.]]




    Defined in src/operator/tensor/ordering_op.cc:L126

    returns

    org.apache.mxnet.NDArray
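
    A sketch of the axis=0 example (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array(Array(1f, 4f, 3f, 1f), Shape(2, 2))
    // [[1., 1.], [3., 4.]]
    val sorted = NDArray.sort(Map("axis" -> 0))(x).head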

  400. abstract def sort(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Returns a sorted copy of an input array along the given axis.

    Examples::

    x = [[ 1, 4],
         [ 3, 1]]

    // sorts along the last axis
    sort(x) = [[ 1.,  4.],
               [ 1.,  3.]]

    // flattens and then sorts
    sort(x, axis=None) = [ 1.,  1.,  3.,  4.]

    // sorts along the first axis
    sort(x, axis=0) = [[ 1.,  1.],
                       [ 3.,  4.]]

    // in descending order
    sort(x, is_ascend=0) = [[ 4.,  1.],
                            [ 3.,  1.]]




    Defined in src/operator/tensor/ordering_op.cc:L126

    Returns a sorted copy of an input array along the given axis.

    Examples::

    x = [[ 1, 4],
         [ 3, 1]]

    // sorts along the last axis
    sort(x) = [[ 1.,  4.],
               [ 1.,  3.]]

    // flattens and then sorts
    sort(x, axis=None) = [ 1.,  1.,  3.,  4.]

    // sorts along the first axis
    sort(x, axis=0) = [[ 1.,  1.],
                       [ 3.,  4.]]

    // in descending order
    sort(x, is_ascend=0) = [[ 4.,  1.],
                            [ 3.,  1.]]




    Defined in src/operator/tensor/ordering_op.cc:L126

    returns

    org.apache.mxnet.NDArray

  401. abstract def split(args: Any*): NDArrayFuncReturn

    Splits an array along a particular axis into multiple sub-arrays.

    ..

    Splits an array along a particular axis into multiple sub-arrays.

    .. note:: SliceChannel is deprecated. Use split instead.

    **Note** that num_outputs should evenly divide the length of the axis
    along which to split the array.

    Example::

    x = [[[ 1.]
          [ 2.]]
         [[ 3.]
          [ 4.]]
         [[ 5.]
          [ 6.]]]
    x.shape = (3, 2, 1)

    y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
    y = [[[ 1.]]
         [[ 3.]]
         [[ 5.]]]

        [[[ 2.]]
         [[ 4.]]
         [[ 6.]]]

    y[0].shape = (3, 1, 1)

    z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
    z = [[[ 1.]
          [ 2.]]]

        [[[ 3.]
          [ 4.]]]

        [[[ 5.]
          [ 6.]]]

    z[0].shape = (1, 2, 1)

    squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting squeeze_axis to 1 removes an axis of length 1 only
    along the axis on which the array is split.
    Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.

    Example::

    z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
    z = [[ 1.]
         [ 2.]]

        [[ 3.]
         [ 4.]]

        [[ 5.]
         [ 6.]]
    z[0].shape = (2, 1)



    Defined in src/operator/slice_channel.cc:L107

    returns

    org.apache.mxnet.NDArray
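
    A sketch showing that split returns several arrays, all reachable through the returned NDArrayFuncReturn (illustrative)::

    import org.apache.mxnet.{NDArray, Shape}

    val x = NDArray.array((1 to 6).map(_.toFloat).toArray, Shape(3, 2, 1))
    val parts = NDArray.split(Map("axis" -> 1, "num_outputs" -> 2))(x)

    println(parts(0).shape)  // (3, 1, 1)
    println(parts(1).shape)  // (3, 1, 1)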

  402. abstract def split(kwargs: Map[String, Any] = null)(args: Any*): NDArrayFuncReturn

    Splits an array along a particular axis into multiple sub-arrays.

    ..

    Splits an array along a particular axis into multiple sub-arrays.

    .. note:: SliceChannel is deprecated. Use split instead.

    **Note** that num_outputs should evenly divide the length of the axis
    along which to split the array.

    Example::

    x = [[[ 1.]
          [ 2.]]
         [[ 3.]
          [ 4.]]
         [[ 5.]
          [ 6.]]]
    x.shape = (3, 2, 1)

    y = split(x, axis=1, num_outputs=2) // a list of 2 arrays with shape (3, 1, 1)
    y = [[[ 1.]]
         [[ 3.]]
         [[ 5.]]]

        [[[ 2.]]
         [[ 4.]]
         [[ 6.]]]

    y[0].shape = (3, 1, 1)

    z = split(x, axis=0, num_outputs=3) // a list of 3 arrays with shape (1, 2, 1)
    z = [[[ 1.]
          [ 2.]]]

        [[[ 3.]
          [ 4.]]]

        [[[ 5.]
          [ 6.]]]

    z[0].shape = (1, 2, 1)

    squeeze_axis=1 removes the axis with length 1 from the shapes of the output arrays.
    **Note** that setting squeeze_axis to 1 removes an axis of length 1 only
    along the axis on which the array is split.
    Also, squeeze_axis can be set to true only if input.shape[axis] == num_outputs.

    Example::

    z = split(x, axis=0, num_outputs=3, squeeze_axis=1) // a list of 3 arrays with shape (2, 1)
    z = [[ 1.]
         [ 2.]]

        [[ 3.]
         [ 4.]]

        [[ 5.]
         [ 6.]]
    z[0].shape = (2, 1)



    Defined in src/operator/slice_channel.cc:L107

    returns

    org.apache.mxnet.NDArray

  403. abstract def sqrt(args: Any*): NDArrayFuncReturn