contrib.ndarray
Functions
- Applies 2D adaptive average pooling over a 4D input with shape NCHW.
- Batch normalization with ReLU fusion.
- Performs 2D resizing (upsampling or downsampling) of a 4D input using bilinear interpolation.
- Connectionist Temporal Classification loss.
- Computes 2-D deformable convolution on 4-D input.
- Performs deformable position-sensitive region-of-interest pooling on inputs.
- Computes 2-D modulated deformable convolution on 4-D input.
- Converts multibox detection predictions.
- Generates prior (anchor) boxes from data, sizes, and ratios.
- Computes multibox training targets.
- Generates region proposals via RPN.
- Performs region-of-interest pooling on inputs.
- Generates region proposals via RPN.
- Takes a 4D feature map as input and region proposals as ROIs, aligns the feature map over sub-regions of the input, and produces a fixed-size output array.
- Performs rotated ROI align on the input array.
- Maps integer indices to vector representations (embeddings).
- Batch normalization.
- Implements numpy.allclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False).
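The allclose check above follows NumPy's semantics: two arrays match when every element satisfies |a - b| <= atol + rtol * |b|. A minimal NumPy sketch of that tolerance test (the arrays here are illustrative):

```python
import numpy as np

def allclose(a, b, rtol=1e-05, atol=1e-08):
    # Elementwise tolerance check matching numpy.allclose semantics.
    return bool(np.all(np.abs(a - b) <= atol + rtol * np.abs(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0 + 1e-9])
print(allclose(a, b))        # True: the difference is within tolerance
print(allclose(a, b + 0.1))  # False: 0.1 exceeds the tolerance
```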
- Returns an array with evenly spaced values.
- Computes bipartite matching.
- Given an n-d NDArray data and a 1-d NDArray index, produces an n-d NDArray out whose shape cannot be determined in advance: it contains the rows of data whose corresponding element in index is non-zero.
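In NumPy terms, the boolean-mask operation above amounts to indexing with a non-zero mask; a minimal sketch (data and index are hypothetical example arrays):

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6]])
index = np.array([0, 1, 1])

# Keep only the rows of `data` whose mask entry is non-zero. The output
# shape depends on the mask contents, so it cannot be known in advance.
out = data[index != 0]
print(out)  # rows 1 and 2: [[3 4], [5 6]]
```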
- Decodes bounding-box training targets with normalized center offsets.
- Encodes bounding-box training targets with normalized center offsets.
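A common form of normalized center-offset encoding (as used by SSD-style detectors) expresses a ground-truth box relative to an anchor. This is a sketch of that scheme; the std values are an illustrative assumption, not necessarily this operator's defaults:

```python
import numpy as np

def encode(anchor, gt, stds=(0.1, 0.1, 0.2, 0.2)):
    """Encode a ground-truth box against an anchor as normalized center
    offsets. Boxes are (xmin, ymin, xmax, ymax); stds are assumed defaults."""
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]
    ax, ay = anchor[0] + 0.5 * aw, anchor[1] + 0.5 * ah
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    gx, gy = gt[0] + 0.5 * gw, gt[1] + 0.5 * gh
    return np.array([(gx - ax) / (aw * stds[0]),   # normalized x offset
                     (gy - ay) / (ah * stds[1]),   # normalized y offset
                     np.log(gw / aw) / stds[2],    # log width ratio
                     np.log(gh / ah) / stds[3]])   # log height ratio

t = encode(np.array([0., 0., 10., 10.]), np.array([0., 0., 10., 10.]))
print(t)  # identical boxes encode to all zeros
```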
- Computes the bounding-box overlap of two arrays.
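Bounding-box overlap is typically measured as intersection-over-union (IoU); a minimal sketch for a single pair of corner-format boxes:

```python
def iou(a, b):
    """IoU of two boxes in (xmin, ymin, xmax, ymax) format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.1429
```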
- Applies non-maximum suppression to the input.
- Applies non-maximum suppression to the input.
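Greedy non-maximum suppression keeps the highest-scoring box and discards remaining boxes that overlap it beyond a threshold, then repeats. A compact NumPy sketch (the 0.5 threshold and the example boxes are illustrative, not this operator's defaults):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over (xmin, ymin, xmax, ymax) boxes; returns kept indices."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # IoU of box i against all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: box 1 is suppressed by box 0
```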
- Provides calibrated min/max values for an input histogram.
- Applies CountSketch to the input: maps d-dimensional data to k-dimensional data.
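CountSketch hashes each of the d input dimensions to one of k output bins with a random sign. A minimal sketch of that mapping, with hypothetical hash and sign tables drawn randomly:

```python
import numpy as np

def count_sketch(x, h, s, k):
    """Project a d-dim vector x to k dims: out[h[i]] += s[i] * x[i]."""
    out = np.zeros(k)
    np.add.at(out, h, s * x)  # unbuffered scatter-add into the k bins
    return out

rng = np.random.default_rng(0)
d, k = 8, 4
h = rng.integers(0, k, size=d)       # bin assignment for each input dim
s = rng.choice([-1.0, 1.0], size=d)  # random sign for each input dim
x = np.arange(d, dtype=float)
print(count_sketch(x, h, s, k))      # k-dimensional sketch of x
```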
- Connectionist Temporal Classification loss.
- Dequantizes the input tensor into a float tensor.
- Converts a CSR matrix whose values are edge IDs to an adjacency matrix whose values are ones.
- Samples a sub-graph from a CSR graph with non-uniform probability.
- Samples sub-graphs from a CSR graph with uniform probability.
- Compacts a CSR matrix generated by dgl_csr_neighbor_uniform_sample and dgl_csr_neighbor_non_uniform_sample.
- Constructs an induced subgraph of a graph for a given set of vertices.
- Rescales the input by the square root of the channel dimension.
- Implements the edge_id function for a graph stored in a CSR matrix (the values of the CSR matrix store the edge IDs of the graph).
- Applies a 1D FFT to the input.
- Number of stored values in a sparse tensor, including explicit zeros.
- Implements the gradient-multiplier function.
- Update function for the Group AdaGrad optimizer.
- Computes the log-likelihood of a univariate Hawkes process.
- Applies a 1D inverse FFT to the input.
- Returns an array of indexes of the input array.
- Copies the elements of new_tensor into old_tensor.
- Computes the matrix multiplication between the projections of queries and keys in multi-head attention, used as encoder-decoder attention.
- Computes the matrix multiplication between the projections of values and the attention weights in multi-head attention, used as encoder-decoder attention.
- Computes the matrix multiplication between the projections of queries and keys in multi-head attention, used as self-attention.
- Computes the matrix multiplication between the projections of values and the attention weights in multi-head attention, used as self-attention.
- Multiplies matrices using 8-bit integers.
- Computes the maximum absolute value of a float32 tensor quickly on the CPU.
- Quantizes float32 values to int8 while excluding -128.
- Converts a weight matrix in column-major format to intgemm's internal fast representation of weight matrices.
- Indexes a weight matrix stored in intgemm's weight format.
- Implements the quadratic function.
- Quantizes an input tensor from float to out_type with user-specified min_range and max_range.
- Quantizes an input tensor from float to uint8_t.
- Quantizes an input tensor from float to out_type with user-specified min_calib_range and max_calib_range, or with the input range collected at runtime.
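Symmetric int8 quantization from a calibration range typically derives a scale from the larger absolute bound, then rounds and saturates. A simplified sketch of that scheme; the actual operator's rounding and saturation details may differ:

```python
import numpy as np

def quantize_int8(x, min_calib, max_calib):
    """Quantize float32 data to int8 using a symmetric scale derived
    from the calibration range (a simplified model, not the exact kernel)."""
    scale = 127.0 / max(abs(min_calib), abs(max_calib))
    q = np.clip(np.round(x * scale), -127, 127).astype(np.int8)
    return q, scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_int8(x, -1.0, 1.0)
print(q)      # int8 values: -127, -64, 0, 64, 127
print(scale)  # 127.0
```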
- Activation operator for int8 input and output.
- BatchNorm operator for int8 input and output.
- Joins input arrays along a given axis.
- Convolution operator with int8 input, weight, and bias, accumulating into an int32 output.
- elemwise_add operator for int8 inputs dataA and dataB.
- Multiplies int8 arguments element-wise.
- Maps integer indices to int8 vector representations (embeddings).
- Fully-connected operator with int8 input, weight, and bias, accumulating into an int32 output.
- Pooling operator for int8 input and output.
- RNN operator for uint8 input.
- Given data quantized in int32 and the corresponding thresholds, requantizes the data into int8 using min/max thresholds either calculated at runtime or obtained from calibration.
- Straight-through estimator of round().
- Straight-through estimator of sign().
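A straight-through estimator applies a non-differentiable function such as round() in the forward pass, but passes the incoming gradient through unchanged in the backward pass (since the true derivative is zero almost everywhere). Sketched with plain NumPy, with forward and backward written as separate functions:

```python
import numpy as np

def round_ste_forward(x):
    # Forward: the ordinary (non-differentiable) rounding.
    return np.round(x)

def round_ste_backward(grad_out):
    # Backward: pretend round() is the identity, so the gradient
    # flows through unchanged instead of being zero almost everywhere.
    return grad_out

x = np.array([0.2, 0.7, 1.5])
y = round_ste_forward(x)
g = round_ste_backward(np.ones_like(x))
print(y)  # [0. 1. 2.]
print(g)  # [1. 1. 1.]
```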