►Ndmlc | Namespace for dmlc |
►Nio | |
CFileInfo | Used to store file information
CFileSystem | File system interface
CURI | Common data structure for URI |
►Nlua_stack | |
CHandler | |
►Nparameter | |
CFieldEntry< mxnet::TShape > | |
►Nserializer | Internal namespace for serializers |
CHandler | Generic serialization handler |
Carray_view | Read-only data structure to reference a continuous memory region of an array. Provides a unified view for vector, array, and C-style array. This data structure does not guarantee aliveness of the referenced array
CBlockingQueueThread | Blocking queue thread class |
CConcurrentBlockingQueue | Concurrent blocking queue
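The ConcurrentBlockingQueue entry above names dmlc's bounded producer/consumer queue; a minimal standard-C++ sketch of the same idea (the class and member names here are illustrative, not dmlc's actual API):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal blocking-queue sketch: Push wakes one waiting consumer,
// Pop blocks until an element is available.
template <typename T>
class BlockingQueue {
 public:
  void Push(T value) {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      queue_.push(std::move(value));
    }
    cv_.notify_one();
  }
  T Pop() {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return !queue_.empty(); });
    T value = std::move(queue_.front());
    queue_.pop();
    return value;
  }

 private:
  std::mutex mutex_;
  std::condition_variable cv_;
  std::queue<T> queue_;
};
```

The real dmlc queue additionally supports capacity bounds and shutdown signalling; this sketch shows only the core wait/notify pattern.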
►CConfig | Class for config parser |
CConfigIterator | Iterator class |
CDataIter | Data iterator interface. This is not a C++-style iterator, but it is convenient for data pulling :) This interface is used to pull in the data; the system can do useful tricks for you, like pre-fetching from disk and pre-computation
CFunctionRegEntryBase | Common base class for function registry |
Chas_saveload | Whether a type has a save/load function
CIfThenElseType | Template to select a type based on a condition. For example, IfThenElseType<true, int, float>::Type will give int
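IfThenElseType mirrors the standard library's std::conditional; a small sketch of the equivalent selection (the alias name is borrowed from the entry above for illustration):

```cpp
#include <type_traits>

// Compile-time type selection, equivalent to the IfThenElseType trait:
// picks Then when cond is true, Else otherwise.
template <bool cond, typename Then, typename Else>
using IfThenElseType = typename std::conditional<cond, Then, Else>::type;

static_assert(std::is_same<IfThenElseType<true, int, float>, int>::value,
              "true selects the first type");
static_assert(std::is_same<IfThenElseType<false, int, float>, float>::value,
              "false selects the second type");
```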
►CInputSplit | Input split that allows reading of records from a split of the data; the independent parts together cover the whole dataset
CBlob | Blob of memory region |
CInputSplitShuffle | Class to construct input split with global shuffling |
Cis_arithmetic | Whether a type is an arithmetic type
Cis_floating_point | Whether a type is floating point type |
Cis_integral | Whether a type is integer type |
Cis_pod | Whether a type is a POD type
Cistream | std::istream class that can wrap Stream objects; lets you use istream syntax to read from an underlying Stream
CJSONObjectReadHelper | Helper class to read JSON into a class or struct object |
CJSONReader | Lightweight JSON reader to read any STL compositions and structs. The user needs to know the schema of the data
CJSONWriter | Lightweight JSON writer to write any STL compositions
CLuaRef | Reference to lua object |
CLuaState | A Lua state |
CManualEvent | Simple manual-reset event gate which remains open after signalled |
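The ManualEvent entry above describes a manual-reset gate; a minimal standard-C++ sketch of that behavior (member names are illustrative, not dmlc's actual API):

```cpp
#include <condition_variable>
#include <mutex>

// Manual-reset event sketch: once signal() is called, every wait()
// returns immediately until reset() closes the gate again.
class ManualEvent {
 public:
  void signal() {
    std::lock_guard<std::mutex> lock(mutex_);
    signaled_ = true;
    cv_.notify_all();
  }
  void wait() {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return signaled_; });
  }
  void reset() {
    std::lock_guard<std::mutex> lock(mutex_);
    signaled_ = false;
  }

 private:
  std::mutex mutex_;
  std::condition_variable cv_;
  bool signaled_ = false;
};
```

The "remains open" property is what distinguishes this from a one-shot notify: late waiters do not block after the event has been signalled.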
CMemoryFixedSizeStream | A Stream that operates on a fixed region of memory, allowing us to read/write from/to a fixed memory region
CMemoryPool | A memory pool that allocates memory of fixed size and alignment
CMemoryStringStream | An in-memory stream that is backed by std::string, allowing us to read/write from/to a std::string
Cnullopt_t | Dummy type for assigning null to optional
COMPException | OMP Exception class catches, saves and rethrows exception from OMP blocks |
Coptional | C++17 compatible optional class |
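Since dmlc::optional is described above as C++17-compatible, its semantics can be illustrated with std::optional itself (ParsePositive is a hypothetical helper, not part of dmlc):

```cpp
#include <optional>

// An optional either holds a value or is empty; callers must check
// before dereferencing. dmlc::optional follows the same contract.
std::optional<int> ParsePositive(int x) {
  if (x > 0) return x;   // engaged: holds the value
  return std::nullopt;   // disengaged: no value
}
```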
Costream | std::ostream class that can wrap Stream objects; lets you use ostream syntax to write to an underlying Stream
CParser | Parser interface that parses input data, used to load dmlc data formats into your own data format. Difference between RowBlockIter and Parser: RowBlockIter caches the data internally so the dataset can be iterated multiple times; Parser holds very limited internal state and is usually used to read the data only once
CParserFactoryReg | Registry entry of parser factory |
CRecordIOChunkReader | Reader of binary recordio from a Blob returned by InputSplit. This class divides the blob into several independent parts specified by the caller and reads from one segment. The part reading can be used together with InputSplit::NextChunk for multi-threaded parsing (each thread takes a RecordIOChunkReader)
CRecordIOReader | Reader of binary recordio that reads records from a stream
CRecordIOWriter | Writer of binary recordio. Record format: magic, lrecord, data, pad
CRegistry | Registry class. Registry can be used to register global singletons. The most common use case is factory functions
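The Registry entry above describes the global-singleton factory pattern; a minimal standard-C++ sketch of the idea (names and the Register/Create interface are illustrative, not dmlc's actual API):

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Factory-registry sketch: a global singleton per value type that maps
// string names to creator functions.
template <typename T>
class Registry {
 public:
  static Registry* Get() {
    static Registry inst;  // one registry per T, constructed on first use
    return &inst;
  }
  void Register(const std::string& name, std::function<T()> creator) {
    creators_[name] = std::move(creator);
  }
  T Create(const std::string& name) const { return creators_.at(name)(); }

 private:
  std::map<std::string, std::function<T()>> creators_;
};
```

dmlc's real Registry additionally stores per-entry metadata (descriptions, argument specs) via FunctionRegEntryBase; this sketch shows only the name-to-factory lookup.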
CRow | One row of training instance |
CRowBlock | Block of data containing several rows in a sparse matrix. This is useful for (streaming-style) algorithms that scan through rows of data; examples include SGD, GD, L-BFGS, k-means
CRowBlockIter | Data structure that holds the data. Row block iterator interface that gets RowBlocks. Difference between RowBlockIter and Parser: RowBlockIter caches the data internally so the dataset can be iterated multiple times; Parser holds very limited internal state and is usually used to read the data only once
CScopedThread | Wrapper class to manage std::thread; uses RAII pattern to automatically join std::thread upon destruction |
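The ScopedThread entry above describes RAII-managed joining; a minimal standard-C++ sketch (the class shape is illustrative, not dmlc's exact declaration):

```cpp
#include <thread>
#include <utility>

// RAII thread wrapper sketch: joins the wrapped std::thread on
// destruction, so early returns or exceptions cannot leak a joinable
// thread (which would call std::terminate).
class ScopedThread {
 public:
  explicit ScopedThread(std::thread t) : thread_(std::move(t)) {}
  ~ScopedThread() {
    if (thread_.joinable()) thread_.join();
  }
  ScopedThread(const ScopedThread&) = delete;
  ScopedThread& operator=(const ScopedThread&) = delete;

 private:
  std::thread thread_;
};
```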
CSeekStream | Interface of i/o stream that support seek |
CSerializable | Interface for serializable objects |
CSpinlock | Simple userspace spinlock implementation |
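The Spinlock entry above names a userspace spinlock; the canonical sketch in standard C++ uses std::atomic_flag (member names are illustrative):

```cpp
#include <atomic>

// Userspace spinlock sketch: lock() busy-waits on an atomic flag
// instead of sleeping in the kernel, which is cheap for very short
// critical sections.
class Spinlock {
 public:
  void lock() {
    while (flag_.test_and_set(std::memory_order_acquire)) {
      // spin until the current holder clears the flag
    }
  }
  void unlock() { flag_.clear(std::memory_order_release); }

 private:
  std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```

Acquire/release ordering makes writes inside the critical section visible to the next thread that takes the lock.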
CStr2T | Interface class that defines a single method get() to convert a string into type T. Define template specialization of this class to define the conversion method for a particular type |
CStr2T< double > | Template specialization of Str2T<> interface for double type |
CStr2T< float > | Template specialization of Str2T<> interface for float type |
CStr2T< int32_t > | Template specialization of Str2T<> interface for signed 32-bit integer |
CStr2T< int64_t > | Template specialization of Str2T<> interface for signed 64-bit integer |
CStr2T< uint32_t > | Template specialization of Str2T<> interface for unsigned 32-bit integer |
CStr2T< uint64_t > | Template specialization of Str2T<> interface for unsigned 64-bit integer |
CStream | Interface of stream I/O for serialization |
CTemporaryDirectory | Manager class for temporary directories. Whenever a new TemporaryDirectory object is constructed, a temporary directory is created. The directory is deleted when the object is deleted or goes out of scope. Note: no symbolic links are allowed inside the temporary directory |
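The TemporaryDirectory entry above describes create-on-construct / delete-on-destruct semantics; a minimal sketch with std::filesystem (requires C++17; the constructor signature is illustrative, not dmlc's actual API):

```cpp
#include <filesystem>
#include <string>

// RAII temp-dir sketch: creates a directory under the system temp path
// on construction and removes it recursively on destruction.
class TemporaryDirectory {
 public:
  explicit TemporaryDirectory(const std::string& name)
      : path_(std::filesystem::temp_directory_path() / name) {
    std::filesystem::create_directory(path_);
  }
  ~TemporaryDirectory() { std::filesystem::remove_all(path_); }
  const std::filesystem::path& path() const { return path_; }

 private:
  std::filesystem::path path_;
};
```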
►CThreadedIter | Iterator backed by a thread that pulls data eagerly from a single producer into a bounded buffer; the consumer can pull the data at its own rate
CProducer | Producer class interface that ThreadedIter uses as a source to produce the content
►CThreadGroup | Thread lifecycle management group |
CThread | Lifecycle-managed thread (used by ThreadGroup) |
CThreadlocalAllocator | A thread-local allocator that gets memory from a thread-local memory pool. Suitable for allocating objects that do not cross threads
CThreadlocalSharedPtr | Shared-pointer-like type that allocates objects from a thread-local object pool. This object is not thread-safe but can be faster than shared_ptr in certain use cases
CThreadLocalStore | A thread-local store for thread-local variables. Returns a thread-local singleton of type T
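The ThreadLocalStore entry above is the per-thread-singleton pattern; in modern C++ the whole idea fits in a few lines (this sketch uses the `thread_local` keyword rather than dmlc's portable implementation):

```cpp
// Thread-local singleton sketch: each thread that calls Get() receives
// its own value-initialized instance of T, created on first use.
template <typename T>
struct ThreadLocalStore {
  static T* Get() {
    static thread_local T inst;
    return &inst;
  }
};
```

Because the instance is per-thread, access needs no locking; different threads never share the returned pointer.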
CTimerThread | Managed timer thread |
Ctype_name_helper | Helper class to construct a string that represents type name |
Ctype_name_helper< mxnet::Tuple< T > > | |
Ctype_name_helper< nnvm::Tuple< T > > | |
►Nmshadow | Overloaded + operator between half_t and bf16_t |
►Nexpr | Namespace for abstract expressions and expression templates; has no dependency on tensor.h. These data structures take no part in computations; they are only used to define operations and represent expressions symbolically
CBinaryMapExp | Binary map expression lhs [op] rhs |
CBLASEngine | |
CBLASEngine< cpu, double > | |
CBLASEngine< cpu, float > | |
CBLASEngine< gpu, double > | |
CBLASEngine< gpu, float > | |
CBLASEngine< gpu, half::half_t > | |
CBroadcast1DExp | Broadcast Tensor1D into a higher dimension Tensor input: Tensor<Device,1>: ishape[0] output: Tensor<Device,dimdst> : oshape[dimcast] = ishape[0] |
CBroadcastScalarExp | Broadcast scalar into a higher dimension Tensor input: Tensor<Device,1>: ishape = {1} output: Tensor<Device, dimdst> : oshape[dimcast] = ishape[0] |
CBroadcastWithAxisExp | Broadcasting the tensor along the given axis. If keepdim is off, insert the broadcasting dim after axis; otherwise, broadcast along axis
CBroadcastWithMultiAxesExp | Broadcasting the tensor in multiple axes. The dimension of the source tensor in the given axes must be 1 |
CChannelPoolingExp | Channel pooling expression, do reduction over (local nearby) channels, used to implement local response normalization |
CChannelUnpoolingExp | Channel unpooling expression, reverse operation of channel pooling, used to pass the gradient back
CComplexBinaryMapExp | Binary map expression lhs [op] rhs where lhs and rhs are complex tensors |
CComplexUnitaryExp | Compute conj(src) where src is a complex tensor |
CConcatExp | Concat expression, concatenates two tensors along the channel dimension
CCroppingExp | Crop expression, cut off the boundary region, reverse operation of padding |
CDotEngine | |
CDotEngine< SV, xpu, 1, 1, 2, false, transpose_right, DType > | |
CDotEngine< SV, xpu, 2, 1, 1, true, false, DType > | |
CDotEngine< SV, xpu, 2, 2, 2, transpose_left, transpose_right, DType > | |
CDotExp | Matrix multiplication expression dot(lhs[.T], rhs[.T]) |
CExp | Defines how expression exp can be evaluated and stored into dst |
CExpComplexEngine | Engine that evaluates complex expressions
CExpComplexEngine< SV, Tensor< Device, 1, DType >, ReduceTo1DExp< SrcExp, DType, Reducer, 1 >, DType > | |
CExpComplexEngine< SV, Tensor< Device, 1, DType >, ReduceTo1DExp< SrcExp, DType, Reducer, m_dimkeep >, DType > | |
CExpComplexEngine< SV, Tensor< Device, dim, DType >, DotExp< Tensor< Device, ldim, DType >, Tensor< Device, rdim, DType >, ltrans, rtrans, DType >, DType > | |
CExpEngine | Engine that dispatches simple operations |
CExpInfo | Static type inference template, used to get the dimension of each expression. If ExpInfo<E>::kDim == -1, there is a mismatch in the expression; if (ExpInfo<E>::kDevMask & cpu::kDevMask) != 0, the expression can be assigned to the CPU
CExpInfo< BinaryMapExp< OP, TA, TB, DType, etype > > | |
CExpInfo< ComplexBinaryMapExp< calctype, OP, TA, TB, DType, etype > > | |
CExpInfo< ComplexUnitaryExp< calctype, OP, TA, DType, etype > > | |
CExpInfo< ConcatExp< LhsExp, RhsExp, Device, DType, srcdim, dimsrc_m_cat > > | |
CExpInfo< FlipExp< SrcExp, Device, DType, srcdim > > | |
CExpInfo< ImplicitGEMMExp< LhsExp, RhsExp, DType > > | |
CExpInfo< MakeTensorExp< T, SrcExp, dim, DType > > | |
CExpInfo< MaskExp< IndexExp, SrcExp, DType > > | |
CExpInfo< MatChooseRowElementExp< SrcExp, IndexExp, DType > > | |
CExpInfo< MatFillRowElementExp< SrcExp, ValExp, IndexExp, DType > > | |
CExpInfo< OneHotEncodeExp< IndexExp, DType > > | |
CExpInfo< RangeExp< DType > > | |
CExpInfo< ScalarExp< DType > > | |
CExpInfo< SliceExExp< SrcExp, Device, DType, srcdim > > | |
CExpInfo< SliceExp< SrcExp, Device, DType, srcdim, dimsrc_m_slice > > | |
CExpInfo< TakeExp< IndexExp, SrcExp, DType > > | |
CExpInfo< TakeGradExp< IndexExp, SrcExp, DType > > | |
CExpInfo< Tensor< Device, dim, DType > > | |
CExpInfo< TernaryMapExp< OP, TA, TB, TC, DType, etype > > | |
CExpInfo< TransposeExp< E, DType > > | |
CExpInfo< TransposeIndicesExp< SrcExp, DType, dimsrc, etype > > | |
CExpInfo< TypecastExp< DstDType, SrcDType, EType, etype > > | |
CExpInfo< UnaryMapExp< OP, TA, DType, etype > > | |
CFlipExp | Flip expression, reverses a tensor along the given dimension
CImplicitGEMMExp | Matrix multiplication |
CMakeTensorExp | General class that allows extension that makes tensors of some shape |
CMaskExp | Broadcast a mask and do element-wise multiplication |
CMatChooseRowElementExp | Make a choice of index in the lowest changing dimension |
CMatFillRowElementExp | Set value of a specific element in each line of the data matrix |
CMirroringExp | Mirror expression, mirrors an image in width
COneHotEncodeExp | Create a one-hot indicator array |
CPackColToPatchXExp | Reverse operation of UnpackPatchToCol, used to backprop the gradient; this is a version supporting multiple images
CPacketAlignCheck | |
CPacketAlignCheck< dim, BinaryMapExp< OP, TA, TB, DType, etype >, Arch > | |
CPacketAlignCheck< dim, ScalarExp< DType >, Arch > | |
CPacketAlignCheck< dim, Tensor< cpu, dim, DType >, Arch > | |
CPacketAlignCheck< dim, UnaryMapExp< OP, TA, DType, etype >, Arch > | |
CPacketCheck | Static check for whether packet acceleration is enabled
CPacketCheck< BinaryMapExp< OP, TA, TB, DType, etype >, Arch > | |
CPacketCheck< double, Arch > | |
CPacketCheck< float, Arch > | |
CPacketCheck< ScalarExp< DType >, Arch > | |
CPacketCheck< Tensor< cpu, dim, DType >, Arch > | |
CPacketCheck< UnaryMapExp< OP, TA, DType, etype >, Arch > | |
CPacketPlan | |
CPacketPlan< BinaryMapExp< OP, TA, TB, DType, etype >, DType, Arch > | |
CPacketPlan< ScalarExp< DType >, DType, Arch > | |
CPacketPlan< Tensor< Device, dim, DType >, DType, Arch > | |
CPacketPlan< UnaryMapExp< OP, TA, DType, etype >, DType, Arch > | |
CPaddingExp | Padding expression, pads an image with zeros
CPlan | |
CPlan< BinaryMapExp< OP, TA, TB, DType, etype >, DType > | |
CPlan< Broadcast1DExp< SrcExp, DType, dimdst, 1 >, DType > | Execution plan of Broadcast1DExp |
CPlan< Broadcast1DExp< SrcExp, DType, dimdst, dimdst_m_cast >, DType > | |
CPlan< BroadcastScalarExp< SrcExp, DType, dimdst >, DType > | Execution plan of Broadcast1DExp |
CPlan< BroadcastWithAxisExp< SrcExp, DType, dimsrc, dimdst >, DType > | |
CPlan< BroadcastWithMultiAxesExp< SrcExp, DType, dimsrc >, DType > | |
CPlan< ChannelPoolingExp< Reducer, SrcExp, DType, srcdim >, DType > | |
CPlan< ChannelUnpoolingExp< Reducer, SrcExp, DType, srcdim >, DType > | |
CPlan< ComplexBinaryMapExp< op::complex::kBinaryCC, OP, TA, TB, DType, etype >, DType > | |
CPlan< ComplexBinaryMapExp< op::complex::kBinaryCR, OP, TA, TB, DType, etype >, DType > | |
CPlan< ComplexBinaryMapExp< op::complex::kBinaryRC, OP, TA, TB, DType, etype >, DType > | |
CPlan< ComplexUnitaryExp< op::complex::kUnitaryC2C, OP, TA, DType, etype >, DType > | |
CPlan< ComplexUnitaryExp< op::complex::kUnitaryC2R, OP, TA, DType, etype >, DType > | |
CPlan< ComplexUnitaryExp< op::complex::kUnitaryR2C, OP, TA, DType, etype >, DType > | |
CPlan< ConcatExp< LhsExp, RhsExp, Device, DType, srcdim, 1 >, DType > | |
CPlan< ConcatExp< LhsExp, RhsExp, Device, DType, srcdim, dimsrc_m_cat >, DType > | |
CPlan< CroppingExp< SrcExp, DType, srcdim >, DType > | |
CPlan< FlipExp< SrcExp, Device, DType, srcdim >, DType > | |
CPlan< ImplicitGEMMExp< LhsExp, RhsExp, DType >, DType > | |
CPlan< MakeTensorExp< SubType, SrcExp, dim, DType >, DType > | |
CPlan< MaskExp< IndexExp, SrcExp, DType >, DType > | |
CPlan< MatChooseRowElementExp< SrcExp, IndexExp, DType >, DType > | |
CPlan< MatFillRowElementExp< SrcExp, ValExp, IndexExp, DType >, DType > | |
CPlan< MirroringExp< SrcExp, DType, srcdim >, DType > | |
CPlan< OneHotEncodeExp< IndexExp, DType >, DType > | |
CPlan< PackColToPatchXExp< SrcExp, DType, dstdim >, DType > | |
CPlan< PaddingExp< SrcExp, DType, srcdim >, DType > | |
CPlan< PoolingExp< Reducer, SrcExp, DType, srcdim >, DType > | |
CPlan< RangeExp< DType >, DType > | |
CPlan< ReduceWithAxisExp< Reducer, SrcExp, DType, dimsrc, mask, dimdst >, DType > | |
CPlan< ReshapeExp< SrcExp, DType, dimdst, 1 >, DType > | |
CPlan< ReshapeExp< SrcExp, DType, dimdst, dimsrc >, DType > | |
CPlan< ScalarExp< DType >, DType > | |
CPlan< SliceExExp< SrcExp, Device, DType, srcdim >, DType > | |
CPlan< SliceExp< SrcExp, Device, DType, srcdim, 1 >, DType > | |
CPlan< SliceExp< SrcExp, Device, DType, srcdim, dimsrc_m_slice >, DType > | |
CPlan< SwapAxisExp< SrcExp, DType, dimsrc, 1, a2 >, DType > | |
CPlan< SwapAxisExp< SrcExp, DType, dimsrc, m_a1, a2 >, DType > | |
CPlan< TakeExp< IndexExp, SrcExp, DType >, DType > | |
CPlan< TakeGradExp< IndexExp, SrcExp, DType >, DType > | |
CPlan< Tensor< Device, 1, DType >, DType > | |
CPlan< Tensor< Device, dim, DType >, DType > | |
CPlan< TernaryMapExp< OP, TA, TB, TC, DType, etype >, DType > | |
CPlan< TransposeExExp< SrcExp, DType, dimsrc >, DType > | |
CPlan< TransposeExp< EType, DType >, DType > | |
CPlan< TransposeIndicesExp< SrcExp, DType, dimsrc, etype >, DType > | |
CPlan< TypecastExp< DstDType, SrcDType, EType, etype >, DstDType > | |
CPlan< UnaryMapExp< OP, TA, DType, etype >, DType > | |
CPlan< UnpackPatchToColXExp< SrcExp, DType, srcdim >, DType > | |
CPlan< UnPoolingExp< Reducer, SrcExp, DType, srcdim >, DType > | |
CPlan< UpSamplingNearestExp< SrcExp, DType, srcdim >, DType > | |
CPoolingExp | Pooling expression, does reduction over local patches of an image
CRangeExp | Generate a range vector similar to Python: range(start, stop[, step][, repeat]). If step is positive, the last element is the largest start + i * step less than stop; if step is negative, the last element is the smallest start + i * step greater than stop. All elements are repeated repeat times, e.g. range(0, 4, 2, 3) -> 0, 0, 0, 2, 2, 2
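The RangeExp semantics above can be modeled in plain C++ to make the repeat behavior concrete (the function name is illustrative; mshadow evaluates this lazily as an expression):

```cpp
#include <vector>

// Plain-C++ model of range(start, stop, step, repeat): each value in
// the arithmetic progression is emitted `repeat` times in a row.
std::vector<int> Range(int start, int stop, int step, int repeat) {
  std::vector<int> out;
  for (int v = start; (step > 0) ? (v < stop) : (v > stop); v += step) {
    for (int r = 0; r < repeat; ++r) out.push_back(v);
  }
  return out;
}
```

This reproduces the documented example: range(0, 4, 2, 3) yields 0, 0, 0, 2, 2, 2.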
CReduceTo1DExp | Reduction to a 1-dimensional tensor. input: Tensor<Device,k>: ishape; output: Tensor<Device,1>: shape[0] = ishape[dimkeep]
CReduceWithAxisExp | Reduce out the dimension of src labeled by axis |
CReshapeExp | Reshape the content to another shape. input: Tensor<Device,dimsrc>: ishape; output: Tensor<Device,dimdst>: oshape, with ishape.Size() == oshape.Size()
CRValueExp | Base class of all rvalues |
CScalarExp | Scalar expression |
CShapeCheck | Runtime shape-checking template; gets the shape of an expression and reports an error on shape mismatch
CShapeCheck< dim, BinaryMapExp< OP, TA, TB, DType, etype > > | |
CShapeCheck< dim, ComplexBinaryMapExp< calctype, OP, TA, TB, DType, etype > > | |
CShapeCheck< dim, ComplexUnitaryExp< calctype, OP, TA, DType, etype > > | |
CShapeCheck< dim, ImplicitGEMMExp< LhsExp, RhsExp, DType > > | |
CShapeCheck< dim, MakeTensorExp< T, SrcExp, dim, DType > > | |
CShapeCheck< dim, MaskExp< IndexExp, SrcExp, DType > > | |
CShapeCheck< dim, MatChooseRowElementExp< SrcExp, IndexExp, DType > > | |
CShapeCheck< dim, MatFillRowElementExp< SrcExp, ValExp, IndexExp, DType > > | |
CShapeCheck< dim, OneHotEncodeExp< IndexExp, DType > > | |
CShapeCheck< dim, RangeExp< DType > > | |
CShapeCheck< dim, ScalarExp< DType > > | |
CShapeCheck< dim, TakeExp< IndexExp, SrcExp, DType > > | |
CShapeCheck< dim, TakeGradExp< IndexExp, SrcExp, DType > > | |
CShapeCheck< dim, Tensor< Device, dim, DType > > | |
CShapeCheck< dim, TernaryMapExp< OP, TA, TB, TC, DType, etype > > | |
CShapeCheck< dim, TransposeExp< E, DType > > | |
CShapeCheck< dim, TransposeIndicesExp< SrcExp, DType, dimsrc, etype > > | |
CShapeCheck< dim, TypecastExp< DstDType, SrcDType, EType, etype > > | |
CShapeCheck< dim, UnaryMapExp< OP, TA, DType, etype > > | |
CShapeCheck< srcdim, ConcatExp< LhsExp, RhsExp, Device, DType, srcdim, dimsrc_m_cat > > | |
CShapeCheck< srcdim, FlipExp< SrcExp, Device, DType, srcdim > > | |
CShapeCheck< srcdim, SliceExExp< SrcExp, Device, DType, srcdim > > | |
CShapeCheck< srcdim, SliceExp< SrcExp, Device, DType, srcdim, dimsrc_m_slice > > | |
CSliceExExp | Slice expression, slice a tensor's channel |
CSliceExp | Slice expression, slice a tensor's channel |
CStreamInfo | |
CStreamInfo< Device, ConcatExp< LhsExp, RhsExp, Device, DType, srcdim, dimsrc_m_cat > > | |
CStreamInfo< Device, FlipExp< SrcExp, Device, DType, srcdim > > | |
CStreamInfo< Device, SliceExExp< SrcExp, Device, DType, srcdim > > | |
CStreamInfo< Device, SliceExp< SrcExp, Device, DType, srcdim, dimsrc_m_slice > > | |
CStreamInfo< Device, Tensor< Device, dim, DType > > | |
CSwapAxisExp | Swap two axes of a tensor. input: Tensor<Device,dim>: ishape; output: Tensor<Device,dimdst>: oshape[a1], oshape[a2] = ishape[a2], ishape[a1]
CTakeExp | Take a column from a matrix |
CTakeGradExp | Calculate embedding gradient |
CTernaryMapExp | Ternary map expression |
CTransposeExExp | Transpose axes of a tensor. input: Tensor<Device,dim>: ishape; output shape is ishape permuted by the given axes
CTransposeExp | Represent a transpose expression of a container |
CTransposeIndicesExp | Transform contiguous indices of the source tensor to indices of the transposed tensor. input: Tensor<Device, k>: ishape output: Tensor<Device, k>: oshape = ishape |
CTypecastExp | Typecast expression, cast the type of elements |
CTypeCheck | Template to do type check |
CTypeCheckPass | Used to help static type check |
CTypeCheckPass< false > | |
CTypeCheckPass< true > | |
CUnaryMapExp | Unary map expression op(src) |
CUnpackPatchToColXExp | Unpack local (overlapping) patches of an image to columns of a matrix; can be used to implement convolution. This expression allows unpacking of a batch (this version supports unpacking multiple images). After getting the unpacked mat, we can use output = dot(weight, mat) to get the convolved results
CUnPoolingExp | Unpooling expression, reverse operation of pooling, used to pass the gradient back
CUpSamplingNearestExp | Nearest-neighbor upsampling: out(x, y) = in(int(x / scale_x), int(y / scale_y))
►Nop | Namespace for operators |
►Ncomplex | |
Cabs_square | |
Cconjugate | |
Cdiv | |
Cexchange | |
Cmul | |
Cpad_imag | |
Csum_real_imag | |
Ctoreal | |
Cdiv | Divide operator |
Cidentity | Identity function that maps a real number to itself
Cminus | Minus operator |
Cmul | Mul operator |
Cplus | Plus operator |
Cright | Get rhs |
►Npacket | Namespace of packet math |
CAlignBytes | |
CPacket | Generic packet type |
CPacket< double, kSSE2 > | Vector real type for double
CPacket< DType, kPlain > | |
CPacket< float, kSSE2 > | |
CPacketOp | Generic Packet operator |
CPacketOp< op::div, DType, Arch > | |
CPacketOp< op::identity, DType, Arch > | |
CPacketOp< op::minus, DType, Arch > | |
CPacketOp< op::mul, DType, Arch > | |
CPacketOp< op::plus, DType, Arch > | |
CSaver | |
CSaver< sv::saveto, TFloat, Arch > | |
►Nred | Namespace for potential reducer operations |
Cmaximum | Maximum reducer |
Cminimum | Minimum reducer |
Csum | Sum reducer |
►Nsv | Namespace for savers |
Cdivto | Divide to saver: /= |
Cminusto | Minus to saver: -= |
Cmulto | Multiply to saver: *= |
Cplusto | Save to saver: += |
Csaveto | Save to saver: = |
►Nutils | |
CIStream | Interface of stream I/O, used to serialize data. mshadow is not restricted to only this interface; in SaveBinary/LoadBinary, mshadow accepts any class that implements Read and Write
Ccpu | Device name CPU |
CDataType | |
CDataType< bfloat::bf16_t > | |
CDataType< bool > | |
CDataType< double > | |
CDataType< float > | |
CDataType< half::half2_t > | |
CDataType< half::half_t > | |
CDataType< int32_t > | |
CDataType< int64_t > | |
CDataType< int8_t > | |
CDataType< uint8_t > | |
Cgpu | Device name GPU |
CLayoutType | |
CLayoutType< kNCDHW > | |
CLayoutType< kNCHW > | |
CLayoutType< kNDHWC > | |
CLayoutType< kNHWC > | |
CMapExpCPUEngine | |
CMapExpCPUEngine< true, SV, Tensor< cpu, dim, DType >, dim, DType, E, etype > | |
CRandom | Random number generator |
CRandom< cpu, DType > | CPU random number generator |
CRandom< gpu, DType > | GPU random number generator |
CShape | Shape of a tensor |
CStream | Computation stream structure, used for asynchronous computations
CStream< gpu > | |
CTensor | General tensor |
CTensor< Device, 1, DType > | |
CTensorContainer | Tensor container that does memory allocation and resizing like STL; use it to save the lines of FreeSpace in a class. Do not abuse it; efficiency comes from pre-allocation and no re-allocation
CTRValue | Tensor RValue, this is the super type of all kinds of possible tensors |
►Nmxnet | Namespace of mxnet |
►Ncommon | |
►Ncuda | Common utils for cuda |
CCublasType | Converts between C++ datatypes and enums/constants needed by cuBLAS |
CCublasType< double > | |
CCublasType< float > | |
CCublasType< int32_t > | |
CCublasType< mshadow::half::half_t > | |
CCublasType< uint8_t > | |
CDeviceStore | |
►Nhelper | Helper functions |
CUniqueIf | Helper for non-array type T |
CUniqueIf< T[]> | Helper for an array of unknown bound T |
CUniqueIf< T[kSize]> | Helper for an array of known bound T |
►Nrandom | |
CRandGenerator | |
►CRandGenerator< cpu, DType > | |
CImpl | |
►CRandGenerator< gpu, double > | |
CImpl | |
►CRandGenerator< gpu, DType > | |
CImpl | |
Ccsr_idx_check | Indices should be non-negative, less than the number of columns, and in ascending order per row
Ccsr_indptr_check | IndPtr should be non-negative, in non-decreasing order, start with 0, and end with a value equal to the size of indices
CLazyAllocArray | |
CObjectPool | Object pool for fast allocation and deallocation |
CObjectPoolAllocatable | Helper trait class for easy allocation and deallocation |
Crsp_idx_check | Indices of an RSPNDArray should be non-negative, less than the size of the first dimension, and in ascending order
CStaticArray | Static array. This code is borrowed from struct Shape<ndim>, except that users can specify the type of the elements of the statically allocated array. The object instance of the struct is copyable between CPU and GPU |
►Ncpp | |
CAccuracy | |
CAdaDeltaOptimizer | |
CAdaGradOptimizer | |
CAdamOptimizer | |
CBilinear | |
CConstant | |
CContext | Context interface |
CDataBatch | Default object for holding a mini-batch of data and related information |
CDataIter | |
CEvalMetric | |
CExecutor | Executor interface |
CFactorScheduler | |
CFeedForward | |
CFeedForwardConfig | |
CInitializer | |
CKVStore | |
CLogLoss | |
CLRScheduler | Learning-rate scheduler interface
CMAE | |
CMonitor | Monitor interface |
CMSE | |
CMSRAPrelu | |
CMXDataIter | |
CMXDataIterBlob | |
CMXDataIterMap | |
CNDArray | NDArray interface |
CNDBlob | Struct to store NDArrayHandle |
CNormal | |
COne | |
COperator | Operator interface |
COpMap | OpMap instance holds a map of all the symbol creators so we can get symbol creators by name. This is used internally by Symbol and Operator |
COptimizer | Optimizer interface |
COptimizerRegistry | |
CPSNR | |
CRMSE | |
CRMSPropOptimizer | |
CSGDOptimizer | |
CShape | Dynamic shape class that can hold a shape of arbitrary dimension
CSignumOptimizer | |
CSymBlob | Struct to store SymbolHandle |
CSymbol | Symbol interface |
CUniform | |
CXavier | |
CZero | |
►Nengine | Namespace of engine internal types |
CCallbackOnComplete | OnComplete Callback to the engine, called by AsyncFn when action completes |
CVar | Base class of engine variables |
►Nfeatures | |
CEnumNames | |
CLibInfo | |
►Nop | Namespace of arguments |
CEnvArguments | Environment arguments used by the function. These can be things like scalar arguments when adding a scalar value
CGradFunctionArgument | Super class of all gradient function argument |
CInput0 | First input to the function |
CInput1 | Second input to the function |
COutputGrad | Gradient of output value |
COutputValue | Output value of the function
CSimpleOpRegEntry | Registry entry to register simple operators via functions |
CSimpleOpRegistry | Registry for TBlob functions |
►Nruntime | |
►Ndetail | |
Cfor_each_dispatcher | |
Cfor_each_dispatcher< true, I, F > | |
CMXNetValueCast | |
Ctyped_packed_call_dispatcher | |
Ctyped_packed_call_dispatcher< void > | |
Cunpack_call_dispatcher | |
Cunpack_call_dispatcher< R, 0, index, F > | |
Cunpack_call_dispatcher< void, 0, index, F > | |
CADT | Reference to algebraic data type objects |
CADTBuilder | A builder class that helps to incrementally build ADT |
CADTObj | An object representing a structure or enumeration |
Carray_type_info | The type trait indicates a subclass of TVM's NDArray. For irrelevant classes, code = -1. For TVM NDArray itself, code = 0. All subclasses of NDArray should override code > 0
CEllipsisObj | Ellipsis |
Cextension_type_info | Type traits to mark if a class is tvm extension type |
CInplaceArrayBase | Base template for classes with array like memory layout |
CInteger | |
CIntegerObj | |
CMXNetArgs | Arguments into TVM functions |
CMXNetArgsSetter | |
CMXNetArgValue | A single argument value to PackedFunc. Containing both type_code and MXNetValue |
CMXNetDataType | Runtime primitive data type |
CMXNetPODValue_ | Internal base class to handle conversion to POD values |
CMXNetRetValue | Return value container. Unlike MXNetArgValue, which only holds a reference and does not delete the underlying container during destruction
CObjAllocatorBase | Base class of object allocators that implement make, using the curiously recurring template pattern
CObject | Base class of all object containers |
CObjectEqual | ObjectRef equal functor |
CObjectHash | ObjectRef hash functor |
CObjectPtr | A custom smart pointer for Object |
CObjectRef | Base class of all object reference |
CPackedFunc | Packed function is a type-erased function. The arguments are passed in a packed format
CRegistry | Registry for global function |
►CSimpleObjAllocator | |
CArrayHandler | |
CHandler | |
CSlice | |
CSliceObj | Slice |
CTypedPackedFunc | Please refer to TypedPackedFunc<R(Args...)>
CTypedPackedFunc< R(Args...)> | A PackedFunc wrapper to provide typed function signature. It is backed by a PackedFunc internally |
►CArray | Array container of NodeRef in the DSL graph. Array implements copy-on-write semantics: the array is mutable, but a copy happens when the array is referenced in more than one place
CValueConverter | |
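The copy-on-write semantics described for Array above can be sketched in standard C++ with a shared_ptr-backed container (the class name and interface are illustrative, not the runtime's actual API):

```cpp
#include <memory>
#include <vector>

// Copy-on-write sketch: a mutation copies the underlying vector only
// when the storage is shared with another handle (use_count > 1).
template <typename T>
class CowArray {
 public:
  CowArray() : data_(std::make_shared<std::vector<T>>()) {}
  void push_back(const T& v) {
    if (data_.use_count() > 1) {
      // shared with another handle: detach by copying before mutating
      data_ = std::make_shared<std::vector<T>>(*data_);
    }
    data_->push_back(v);
  }
  const std::vector<T>& get() const { return *data_; }

 private:
  std::shared_ptr<std::vector<T>> data_;
};
```

Copying a CowArray is cheap (one shared_ptr copy); the expensive deep copy is deferred until one of the sharing handles actually mutates.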
CArrayNode | Array node content in array |
CBaseExpr | Managed reference to BaseExprNode |
CBaseExprNode | Base type of all the expressions |
CContext | Context information about the execution environment |
CDataBatch | DataBatch of NDArray, returned by Iterator |
CDataInst | Single data instance |
CDataIteratorReg | Registry entry for DataIterator factory functions |
CEngine | Dependency engine that schedules operations |
CExecutor | Executor of a computation graph. Executor can be created by binding a symbol
CFloatImm | Managed reference class to FloatImmNode |
CFloatImmNode | Constant floating point literals in the program |
CGPUAuxStream | Holds an auxiliary mshadow gpu stream that can be synced with a primary stream |
CIIterator | Iterator type |
►CImperative | Runtime functions for NDArray |
CAGInfo | |
CInspectorManager | This singleton struct mediates individual TensorInspector objects so that we can control the global behavior from each of them |
CIntImm | Managed reference class to IntImmNode |
CIntImmNode | Constant integer literals in the program |
CIterAdapter | Iterator adapter that adapts TIter to return another type |
CKVStore | Distributed key-value store |
CNDArray | Ndarray interface |
CNDArrayFunctionReg | Registry entry for NDArrayFunction |
COpContext | All the possible information needed by Operator.Forward and Backward. This is a superset of RunContext; we use this data structure to bookkeep everything needed by Forward and Backward
COperator | Operator interface. Operator defines the basic operation unit of an optimized computation graph in mxnet. This interface relies on pre-allocated memory in TBlob; the caller needs to set the memory region in TBlob correctly before calling Forward and Backward
COperatorProperty | OperatorProperty is an object that stores all information about an Operator. It also contains methods to generate context (device) specific operators
COperatorPropertyReg | Registry entry for OperatorProperty factory functions |
COpStatePtr | Operator state. This is a pointer type, its content is mutable even if OpStatePtr is const |
CPrimExpr | Reference to PrimExprNode |
CPrimExprNode | Base node of all primitive expressions |
CResource | Resources used by mxnet operations. A resource is something special other than NDArray, but will still participate |
CResourceManager | Global resource manager |
CResourceRequest | The resources that can be requested by Operator |
CRunContext | Execution time context. The information needed in runtime for actual execution |
►CStorage | Storage manager across multiple devices |
CHandle | Storage handle |
CSyncedGPUAuxStream | Provides automatic coordination of an auxiliary stream with a primary one. This object, upon construction, prepares an aux stream for use by syncing it with enqueued primary-stream work. Object destruction will sync again so future primary-stream work will wait on enqueued aux-stream work. If MXNET_GPU_WORKER_NSTREAMS == 1, this degenerates to a no-op: the primary stream equals the aux stream and the syncs are nops. See ./src/operator/cudnn/cudnn_convolution-inl.h for a usage example
CTBlob | Tensor blob class that can be used to hold a tensor of any dimension, any device, and any data type. This is a weak type that can be used to transfer data through interfaces. TBlob itself doesn't involve any arithmetic operations, but it can be converted to a tensor of fixed dimension for further operations
CTensorInspector | This class provides a unified interface to inspect the value of all data types including Tensor, TBlob, and NDArray. If the tensor resides on GPU, then it will be copied from GPU memory back to CPU memory to be operated on. Internally, all data types are stored as a TBlob object tb_ |
CTShape | A Shape class that is used to represent shape of each tensor |
CTuple | A dynamic sized array data structure that is optimized for storing a small number of elements of the same type
►Nnnvm | |
CGraph | Symbolic computation graph. This is the intermediate representation for optimization pass |
►CIndexedGraph | Auxiliary data structure to index a graph. It maps Nodes in the graph to consecutive integer node_ids, and maps each IndexedGraph::NodeEntry to a consecutive integer entry_id. This allows storing properties of Node and NodeEntry in compact vectors and accessing them quickly without resorting to a hash map
CNode | Node data structure in IndexedGraph |
CNodeEntry | Data in the graph |
CLayout | |
CNode | Node represents an operation in a computation graph |
CNodeAttrs | The attributes of the current operation node. Usually these are additional parameters such as axis
CNodeEntry | Entry that represents output data from a node |
CNodeEntryEqual | This lets you use a NodeEntry as a key in an unordered_map of the form unordered_map<NodeEntry, ValueType, NodeEntryHash, NodeEntryEqual>
CNodeEntryHash | This lets you use a NodeEntry as a key in an unordered_map of the form unordered_map<NodeEntry, ValueType, NodeEntryHash, NodeEntryEqual>
COp | Operator structure |
COpGroup | Auxiliary data structure used to set attributes to a group of operators |
COpMap | A map data structure that takes Op* as key and returns ValueType |
CPassFunctionReg | Registry entry for pass functions |
CSymbol | Symbol is a helper class used to represent an operator node in a Graph
CTShape | A Shape class that is used to represent shape of each tensor |
CTuple | A dynamic sized array data structure that is optimized for storing a small number of elements of the same type
►Nstd | |
Chash< dmlc::optional< T > > | Std hash function for optional |
Chash< mxnet::TShape > | Hash function for TShape |
Chash< mxnet::Tuple< T > > | Hash function for Tuple |
Chash< nnvm::TShape > | Hash function for TShape |
Chash< nnvm::Tuple< T > > | Hash function for Tuple |
CCustomOp | Class to hold custom operator registration |
CCustomOpSelector | |
CCustomPartitioner | An abstract class for a subgraph property
CCustomPass | An abstract class for graph passes |
CCustomStatefulOp | An abstract class for library authors creating stateful ops. A custom library should override Forward and the destructor, and may optionally implement Backward
CCustomStatefulOpWrapper | StatefulOp wrapper class to pass to backend OpState |
CDLContext | A Device context for Tensor and operator |
CDLDataType | The data type the tensor can hold |
CDLManagedTensor | C Tensor object that manages the memory of a DLTensor. This data structure is intended to facilitate the borrowing of a DLTensor by another framework. It is not meant to transfer the tensor. When the borrowing framework no longer needs the tensor, it should call the deleter to notify the host that the resource is no longer needed
CDLTensor | Plain C Tensor object, does not manage memory |
Cdnnl_batch_normalization_desc_t | A descriptor of a Batch Normalization operation |
Cdnnl_binary_desc_t | A descriptor of a binary operation |
Cdnnl_blocking_desc_t | |
Cdnnl_convolution_desc_t | A descriptor of a convolution operation |
Cdnnl_eltwise_desc_t | A descriptor of an element-wise operation
Cdnnl_engine | An opaque structure to describe an engine |
Cdnnl_exec_arg_t | |
Cdnnl_inner_product_desc_t | A descriptor of an inner product operation |
Cdnnl_layer_normalization_desc_t | A descriptor of a Layer Normalization operation |
Cdnnl_lrn_desc_t | A descriptor of a Local Response Normalization (LRN) operation |
Cdnnl_matmul_desc_t | |
Cdnnl_memory | |
Cdnnl_memory_desc_t | |
Cdnnl_memory_extra_desc_t | Description of extra information stored in memory |
Cdnnl_pooling_desc_t | A descriptor of a pooling operation |
Cdnnl_post_ops | An opaque structure for a chain of post operations |
Cdnnl_primitive | |
Cdnnl_primitive_attr | An opaque structure for primitive descriptor attributes |
Cdnnl_primitive_desc | An opaque structure to describe a primitive descriptor |
Cdnnl_primitive_desc_iterator | An opaque structure to describe a primitive descriptor iterator |
Cdnnl_resampling_desc_t | A descriptor of a resampling operation
Cdnnl_rnn_desc_t | A descriptor for an RNN operation
Cdnnl_rnn_packed_desc_t | Description of a tensor of packed weights for an RNN
Cdnnl_shuffle_desc_t | A descriptor of a shuffle operation |
Cdnnl_softmax_desc_t | A descriptor of a Softmax operation |
Cdnnl_stream | |
Cdnnl_version_t | |
Cdnnl_wino_desc_t | Description of a tensor of weights for Winograd 2x3 convolution
CJsonParser | Functions used for parsing JSON |
CJsonVal | Definition of JSON objects |
CLibFeature | |
CMXCallbackList | |
CMXContext | Context info passed from MXNet OpContext. dev_type is a string representation of the supported context (currently only "cpu" and "gpu"); dev_id is the device index where the tensor is located
CMXNetByteArray | Byte array type used to pass in a byte array when kBytes is used as the data type
CMXNetValue | Union type of values being passed through API and function calls |
CMXSparse | |
CMXTensor | Tensor data structure used by custom operator |
CNativeOpInfo | |
CNDArrayOpInfo | |
COpResource | Provides resource APIs (such as a memory allocation mechanism) to Forward/Backward functions
CPassResource | |
CRegistry | Registry class to register things (ops, properties); a singleton class