Multi Threaded Inference API
A long standing request from MXNet users has been to invoke parallel inference on a model from multiple threads while sharing the parameters. With this use case in mind, the thread safe version of CachedOp was added to provide a way for MXNet users to do multi-threaded inference. This doc attempts to do the following:
1. Discuss the current state of thread safety in MXNet
2. Explain how one can use the C API and the thread safe version of cached op, along with the CPP package, to achieve multi-threaded inference. This will be useful for end users as well as frontend developers of different language bindings
3. Discuss the limitations of the above approach
4. Future work
Current state of Thread Safety in MXNet
Examining the current state of thread safety in MXNet, we can arrive at the following conclusions:
- MXNet Dependency Engine is thread safe (except for WaitToRead invoked inside a spawned thread. Please see Limitations section)
- Graph Executor, which is the backend for the Module/Symbolic/C Predict APIs, is not thread safe
- Cached Op (Gluon Backend) is not thread safe
The CachedOpThreadSafe and corresponding C APIs were added to address point 3 above and provide a way for MXNet users to do multi-threaded inference.
/*!
 * \brief create cached operator, allows to choose thread_safe version
 * of cachedop
 */
MXNET_DLL int MXCreateCachedOpEX(SymbolHandle handle,
                                 int num_flags,
                                 const char** keys,
                                 const char** vals,
                                 CachedOpHandle *out,
                                 bool thread_safe DEFAULT(false));
Multithreaded inference in MXNet with C API and CPP Package
To complete this tutorial you need to:
- Learn the basics about MXNet C++ API
- Build MXNet from source with make/cmake
- Build the multi-threaded inference example
Setup the MXNet C++ API
To use the C++ API in MXNet, you need to build MXNet from source with the C++ package. Please follow the build from source guide and the C++ Package documentation.
The summary of those two documents is that you need to build MXNet from source with the
USE_CPP_PACKAGE flag set to 1. For example:

make -j USE_CPP_PACKAGE=1 USE_CUDA=1 USE_CUDNN=1

This example requires a build with CUDA and CUDNN.
Build the example
If you have built mxnet from source with make, then do the following:
$ cd example/multi_threaded_inference
$ make
If you have built mxnet from source with cmake, please uncomment the specific lines for the cmake build or set the following environment variables:
- MKLDNN_BUILD_DIR (default is $(MXNET_ROOT)/3rdparty/mkldnn/build)
- MKLDNN_INCLUDE_DIR (default is $(MXNET_ROOT)/3rdparty/mkldnn/include)
- MXNET_LIB_DIR (default is $(MXNET_ROOT)/lib)
Download the model and run multi threaded inference example
To download a model, use the get_model.py script. This downloads a model to run inference.

python3 get_model.py --model <model_name>

For example:

python3 get_model.py --model imagenet1k-inception-bn
Only the models supported by get_model.py work with multi threaded inference.
To run the multi threaded inference example:
$ export LD_LIBRARY_PATH=<MXNET_LIB_DIR>:$LD_LIBRARY_PATH
$ ./multi_threaded_inference [model_name] [is_gpu] [file_names]
./multi_threaded_inference imagenet1k-inception-bn 2 1 grace_hopper.jpg dog.jpg
The above command spawns 2 threads, shares the same cached op and params between the two threads, and runs inference on the GPU. It returns the inference results in the order in which the files are provided.
NOTE: This example demonstrates multi-threaded inference with the cached op. The inference results work well only with specific models (e.g. imagenet1k-inception-bn). The results may not necessarily be very accurate for other models because of the different preprocessing steps required, etc.
Code walkthrough multi-threaded inference with CachedOp
The multi threaded inference example (multi_threaded_inference.cc) involves the following steps:
- Parse arguments and load input image into ndarray
- Prepare input data and load parameters, copying data to a specific context
- Prepare arguments to pass to the CachedOp and call the C API to create the cached op
- Prepare lambda function which will run in spawned threads. Call C API to invoke cached op within the lambda function.
- Spawn multiple threads and wait for all threads to complete.
- Post process data to obtain inference results and cleanup.
Step 1: Parse arguments and load input image into ndarray
The above code parses arguments and loads the image file into an ndarray with a specific shape. There are a few things that are set by default and are not configurable. For example, static_alloc and static_shape are by default set to true.
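Since the shape is fixed, the input image has to be resized to the shape the model expects before it is copied into an NDArray. Below is a minimal sketch of this step, assuming OpenCV for image decoding and the mxnet-cpp NDArray API; the function name LoadImageAsNDArray, the 224x224 target shape, and the simple HWC-to-CHW conversion are illustrative assumptions, not taken verbatim from the example.

```c++
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "mxnet-cpp/MxNetCpp.h"

// Hypothetical helper: decode an image file and place it in an NDArray on `ctx`.
mxnet::cpp::NDArray LoadImageAsNDArray(const std::string& file,
                                       const mxnet::cpp::Context& ctx) {
  cv::Mat img = cv::imread(file, cv::IMREAD_COLOR);
  cv::resize(img, img, cv::Size(224, 224));  // fixed input shape: static_shape is true
  std::vector<mx_float> data(1 * 3 * 224 * 224);
  // Convert HWC BGR bytes to CHW floats, a common input layout for MXNet models.
  for (int c = 0; c < 3; ++c)
    for (int y = 0; y < 224; ++y)
      for (int x = 0; x < 224; ++x)
        data[c * 224 * 224 + y * 224 + x] =
            static_cast<mx_float>(img.at<cv::Vec3b>(y, x)[c]);
  // The mxnet-cpp constructor copies the CPU buffer into an NDArray on the given context.
  return mxnet::cpp::NDArray(data, mxnet::cpp::Shape(1, 3, 224, 224), ctx);
}
```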
Step 2: Prepare input data and load parameters, copying data to a specific context
The above code loads params and copies input data and params to specific context.
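A minimal sketch of this step, assuming the mxnet-cpp NDArray API, is shown below. The helper name LoadParams is hypothetical; the params file itself is the one downloaded by get_model.py.

```c++
#include <map>
#include <string>
#include "mxnet-cpp/MxNetCpp.h"

// Hypothetical helper: load all saved parameters and copy each one to the target
// context (CPU or GPU) so every thread reads from the same device-resident params.
std::map<std::string, mxnet::cpp::NDArray> LoadParams(const std::string& param_file,
                                                       const mxnet::cpp::Context& ctx) {
  std::map<std::string, mxnet::cpp::NDArray> params =
      mxnet::cpp::NDArray::LoadToMap(param_file);
  std::map<std::string, mxnet::cpp::NDArray> params_on_ctx;
  for (const auto& kv : params) {
    params_on_ctx[kv.first] = kv.second.Copy(ctx);  // asynchronous copy to the device
  }
  mxnet::cpp::NDArray::WaitAll();  // make sure all copies have finished before use
  return params_on_ctx;
}
```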
Step 3: Prepare arguments to pass to the CachedOp and call the C API to create the cached op
The above code prepares flag_val_cstrs to be passed to the cached op. The C API call is made with MXCreateCachedOpEX. This leads to the creation of the thread safe cached op since the thread_safe parameter (the last parameter to MXCreateCachedOpEX) is set to true. When it is set to false, it will invoke CachedOp instead of CachedOpThreadSafe.
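A minimal sketch of this call, using the MXCreateCachedOpEX signature shown earlier, assuming a SymbolHandle sym_hdl has already been created (e.g. with MXSymbolCreateFromFile). The helper name and the exact flag list are illustrative; static_alloc and static_shape match the defaults described in Step 1.

```c++
#include <stdexcept>
#include <vector>
#include "mxnet/c_api.h"

// Hypothetical helper: create the thread safe cached op for an already-loaded symbol.
CachedOpHandle CreateThreadSafeCachedOp(SymbolHandle sym_hdl) {
  std::vector<const char*> flag_keys{"static_alloc", "static_shape"};
  std::vector<const char*> flag_vals{"true", "true"};
  CachedOpHandle cached_op_hdl = nullptr;
  int ret = MXCreateCachedOpEX(sym_hdl,
                               static_cast<int>(flag_keys.size()),
                               flag_keys.data(),
                               flag_vals.data(),
                               &cached_op_hdl,
                               true /* thread_safe: selects CachedOpThreadSafe */);
  if (ret != 0) {
    // MXGetLastError() returns the message of the last failed C API call.
    throw std::runtime_error(MXGetLastError());
  }
  return cached_op_hdl;
}
```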
Step 4: Prepare lambda function which will run in spawned threads
The above creates a lambda function that takes the thread number as its argument. If random_sleep is set, the thread sleeps for a random number of seconds between 0 and 5. Following this, it invokes MXInvokeCachedOpEx (from the handle it determines whether to invoke the thread safe version of the cached op or not): if the handle was created with thread_safe set to false, it will invoke CachedOp instead of CachedOpThreadSafe.
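Below is a minimal sketch of the per-thread body, written as a free function for readability; in the example this logic lives inside the lambda each spawned thread runs. The name RunInference and the argument containers are assumptions; cached_op_hdl is the handle created in Step 3 and arg_handles holds the NDArrayHandles (input image plus params) prepared for this thread.

```c++
#include <chrono>
#include <random>
#include <stdexcept>
#include <thread>
#include <vector>
#include "mxnet/c_api.h"

// Hypothetical per-thread body: optionally sleep, then invoke the cached op.
std::vector<NDArrayHandle> RunInference(CachedOpHandle cached_op_hdl,
                                        std::vector<NDArrayHandle>& arg_handles,
                                        bool random_sleep) {
  if (random_sleep) {
    // Sleep for a random duration between 0 and 5 seconds, as described above.
    static thread_local std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> dist(0, 5);
    std::this_thread::sleep_for(std::chrono::seconds(dist(gen)));
  }
  int num_outputs = 0;
  NDArrayHandle* output_handles = nullptr;
  const int* out_stypes = nullptr;
  // Whether CachedOpThreadSafe or CachedOp runs is decided by the handle, i.e. by
  // the thread_safe flag passed to MXCreateCachedOpEX at creation time.
  if (MXInvokeCachedOpEx(cached_op_hdl,
                         static_cast<int>(arg_handles.size()),
                         arg_handles.data(),
                         &num_outputs,
                         &output_handles,
                         &out_stypes) != 0) {
    throw std::runtime_error(MXGetLastError());
  }
  return std::vector<NDArrayHandle>(output_handles, output_handles + num_outputs);
}
```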
Step 5: Spawn multiple threads and wait for all threads to complete
The above code spawns multiple threads, joins them, and waits for all ops to complete. The other alternative is to wait on the output ndarray inside each thread and remove the WaitAll after the join.
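A minimal sketch of the spawn-and-join logic, assuming the RunInference body sketched in Step 4 and one vector of argument handles per thread; the function and container names are illustrative.

```c++
#include <thread>
#include <vector>
#include "mxnet/c_api.h"

// Forward declaration of the per-thread body sketched in Step 4.
std::vector<NDArrayHandle> RunInference(CachedOpHandle cached_op_hdl,
                                        std::vector<NDArrayHandle>& arg_handles,
                                        bool random_sleep);

// Hypothetical helper: run one inference per thread, all sharing the same cached op.
void RunAllThreads(CachedOpHandle cached_op_hdl,
                   std::vector<std::vector<NDArrayHandle>>& per_thread_args,
                   std::vector<std::vector<NDArrayHandle>>& per_thread_outputs,
                   bool random_sleep) {
  const size_t num_threads = per_thread_args.size();
  per_thread_outputs.resize(num_threads);
  std::vector<std::thread> workers;
  for (size_t i = 0; i < num_threads; ++i) {
    workers.emplace_back([&, i]() {
      per_thread_outputs[i] =
          RunInference(cached_op_hdl, per_thread_args[i], random_sleep);
    });
  }
  for (auto& t : workers) {
    t.join();
  }
  // Wait for all pending operations after joining; the alternative is to wait on
  // each output NDArray inside its thread and drop this WaitAll.
  MXNDArrayWaitAll();
}
```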
Step 6: Post process data to obtain inference results and cleanup
The above code outputs results for different threads and cleans up the thread safe cached op.
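A minimal sketch of the post-processing and cleanup, assuming the output handles produced in Step 5. The helper names and the fact that the caller supplies the output size are assumptions for illustration.

```c++
#include <cstddef>
#include <stdexcept>
#include <vector>
#include "mxnet/c_api.h"

// Hypothetical helper: block until an output is ready and copy it into a CPU buffer.
std::vector<float> CopyOutputToCPU(NDArrayHandle out_hdl, size_t out_size) {
  std::vector<float> probs(out_size);
  if (MXNDArraySyncCopyToCPU(out_hdl, probs.data(), probs.size()) != 0) {
    throw std::runtime_error(MXGetLastError());
  }
  return probs;
}

// Hypothetical helper: free the thread safe cached op once all threads have finished.
void Cleanup(CachedOpHandle cached_op_hdl) {
  MXFreeCachedOp(cached_op_hdl);
}
```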
Current Limitations
- Only operators tested with the existing model coverage are supported. Other operators and operator types (stateful operators, custom operators) are not supported. Existing model coverage is as follows (this list will keep growing as we test more models with different model types):
- Only dense storage types are supported currently.
- Multi GPU inference is not currently supported.
- Instantiating multiple instances of SymbolBlockThreadSafe is not supported. Parallel inference can be run on only one model per process.
- Dynamic shapes are not supported by the thread safe cached op.
- Bulking of ops is not supported.
- This only supports inference use cases currently; training use cases are not supported.
- Graph rewrites with the subgraph API are currently not supported.
- There is currently no frontend API support to run multi threaded inference. Users can use MXCreateCachedOpEX and MXInvokeCachedOpEx in combination with the CPP frontend to run multi-threaded inference as of today.
- Multi threaded inference with the threaded engine using the Module/Symbolic API and C Predict API is not currently supported.
- Exceptions thrown with wait_to_read in individual threads can cause issues. Calling invoke from each thread and calling WaitAll after the threads join should still work fine.
- Tested only on environments supported by CI. This means that MacOS is not supported.
Future Work
Future work includes increasing model coverage and addressing most of the limitations mentioned under Current Limitations, except the training use case. For more updates, please subscribe to the discussion activity on the RFC: https://github.com/apache/incubator-mxnet/issues/16431.