Build MXNet from Source¶
This document explains how to build MXNet from source code. Building MXNet from source is a two-step process:
- Build the MXNet shared library, libmxnet.so, from the C++ source files.
- Install the language bindings for MXNet. MXNet supports bindings for multiple languages.
You need C++ build tools and a BLAS library to build the MXNet shared library. If you want to run MXNet with GPUs, you will need to install NVIDIA CUDA and cuDNN first.
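Before starting, it can help to check which of the common build tools are already on the PATH. This is a minimal sketch; the tool names g++, make, and cmake are the usual choices for a Linux build, not a list mandated by MXNet:

```shell
# Report which common C++ build tools are available on PATH
for tool in g++ make cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```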
C++ build tools¶
MXNet relies on a BLAS (Basic Linear Algebra Subprograms) library for numerical computations. BLAS can be extended with LAPACK (Linear Algebra Package), an additional set of mathematical functions.
MXNet supports multiple mathematical backends for computations on the CPU:
Usage of these is covered in more detail in the build configurations section.
Build Instructions by Operating System¶
Detailed instructions are provided per operating system. You may jump to those, but it is recommended that you continue reading first to understand the more general build-from-source options.
- Clone the MXNet project.
```shell
git clone --recursive https://github.com/apache/incubator-mxnet mxnet
cd mxnet
```
There is a configuration file for make, make/config.mk, that contains all the compilation options. You can edit it and then run make to build. cmake is recommended for building MXNet (and is required to build with MKLDNN); however, you may use make instead.
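For make builds, editing make/config.mk is how options are toggled. A few commonly changed lines look like this (an illustrative excerpt, not the file's defaults):

```make
# make/config.mk (excerpt; values are illustrative)
USE_OPENCV = 1
USE_BLAS = openblas
USE_CUDA = 0
```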
Math Library Selection¶
It is useful to consider your math library selection first.
The default order of choice for the libraries, if they are found, goes from the most performant (recommended) to the least performant backend. The following lists show this order by library and cmake option.
For desktop platforms (x86_64):
- MKL-DNN (submodule)
- MKL
- MKLML (downloaded)
- Apple Accelerate: cmake option USE_APPLE_ACCELERATE_IF_AVAILABLE (Mac only)
- OpenBLAS: cmake option BLAS (options: Atlas, Open, MKL, Apple)

Note: if USE_MKL_IF_AVAILABLE is set to False, then MKLML and MKL-DNN will also be disabled during configuration.
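As a sketch, the MKL-family backends can be disabled in favor of OpenBLAS at configure time using the options above (flag spellings follow this section; verify them against your MXNet version):

```
cmake -DUSE_MKL_IF_AVAILABLE=OFF -DBLAS=Open ..
```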
For embedded platforms (all others, and when cross-compiling):
- OpenBLAS: cmake option BLAS (options: Atlas, Open, MKL, Apple)
You can set the BLAS library explicitly by setting the BLAS variable to one of these options. See the cmake/ChooseBLAS.cmake file for the available choices.
Intel’s MKL (Math Kernel Library) is one of the most powerful math libraries: https://software.intel.com/en-us/mkl
It comes in the following flavors:
MKL is a complete math library, containing all the functionality found in ATLAS, OpenBLAS, and LAPACK. It is free under community-support licensing (https://software.intel.com/en-us/articles/free-mkl), but needs to be downloaded and installed manually.
MKLML is a subset of MKL. It contains a smaller number of functions, to reduce the size of the download and the number of dynamic libraries the user needs.
MKL-DNN is a separate open-source library; it can be used independently of MKL or MKLML. It is shipped as a submodule with the MXNet source code (see 3rdparty/mkldnn or the MKL-DNN project).
Since the full MKL library is almost always faster than any other BLAS library, it is turned on by default; however, it needs to be downloaded and installed manually before building MXNet. Register and download it on the Intel performance libraries website.
Note: MKL is supported only for desktop builds, and the framework itself supports the following hardware:
- Intel® Xeon Phi™ processor
- Intel® Xeon® processor
- Intel® Core™ processor family
- Intel Atom® processor
If you have a different processor you can still try to use MKL, but performance results are unpredictable.
Build MXNet with NCCL¶
- Download and install the latest NCCL library from NVIDIA.
- Note the directory path in which NCCL libraries and header files are installed.
- Ensure that the installation directory contains the lib and include folders.
- Ensure that the prerequisites for using NCCL, such as the CUDA libraries, are met.
- Append the following to the config.mk file, in addition to the CUDA-related options:
```shell
echo "USE_NCCL=1" >> make/config.mk
echo "USE_NCCL_PATH=path-to-nccl-installation-folder" >> make/config.mk
cp make/config.mk .
```
- Run the make command.
- Follow the steps to install the MXNet Python binding.
- Comment out the following line in tests/python/gpu/test_nccl.py:
```python
@unittest.skip("Test requires NCCL library installed and enabled during build")
```
- Run the test_nccl.py script as follows. The test should complete; it does not produce any output:
```shell
nosetests --verbose tests/python/gpu/test_nccl.py
```
Recommendation to get the best performance out of NCCL: set the environment variable NCCL_LAUNCH_MODE to PARALLEL when using NCCL version 2.1 or newer.
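For example, in the shell that launches the multi-GPU job (a minimal sketch; the variable is read by NCCL at runtime):

```shell
# Ask NCCL (2.1+) to use the parallel kernel-launch mode
export NCCL_LAUNCH_MODE=PARALLEL
echo "NCCL_LAUNCH_MODE=$NCCL_LAUNCH_MODE"
```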
Build MXNet with Language Packages¶
- To enable the C++ package, add USE_CPP_PACKAGE=1 when you run make.
- The -j option runs multiple jobs against multi-core CPUs. Example using all cores on Linux:
```shell
make -j$(nproc)
```
- Build without using OpenCV:
```shell
make USE_OPENCV=0
```
- Build with OpenBLAS, GPU, and OpenCV support:
```shell
make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
```
- Build on macOS with the default BLAS library (Apple Accelerate) and the Clang installed with xcode (OpenMP is disabled because it is not supported by the Apple version of Clang):
```shell
make -j USE_BLAS=apple USE_OPENCV=0 USE_OPENMP=0
```
- To use OpenMP on macOS you need to install the llvm Clang compiler (the one provided by Apple does not support OpenMP):
```shell
brew install llvm
make -j USE_BLAS=apple USE_OPENMP=1
```
Installing MXNet Language Bindings¶
After building MXNet’s shared library, you can install the other language bindings (except for C++; the C++ package has to be built at the same time as MXNet itself).
The following table provides links to each language binding by operating system: