Reporting a security vulnerability
The Apache Software Foundation takes a very active stance in eliminating security problems and denial of service attacks against its products.
We strongly encourage folks to report such problems to our private security mailing list first, before disclosing them in a public forum.
Please note that the security mailing list should only be used for reporting undisclosed security vulnerabilities and managing the process of fixing such vulnerabilities. We cannot accept regular bug reports or other queries at this address. All mail sent to this address that does not relate to an undisclosed security problem in our source code will be ignored.
Questions about:
- whether a vulnerability applies to your particular application
- obtaining further information on a published vulnerability
- the availability of patches and/or new releases

should be addressed to the users mailing list. Please see the mailing lists page for details on how to subscribe.
MXNet Security Best Practices
The MXNet framework has no built-in security protections. It assumes that all MXNet entities involved in model training and inference (hosting) are fully trusted, and that their communications cannot be eavesdropped on or tampered with. MXNet consumers must ensure that these assumptions are met.
In particular, the following threat vectors exist when training with MXNet:
- When running distributed training using MXNet there is no built-in support for authenticating cluster nodes participating in the training job.
- Data exchange between cluster nodes happens in plain text.
- Using `kvstore.set_optimizer`, one can supply a custom optimizer to combine gradients. This optimizer code is serialized with pickle and sent to the server nodes. A server performs no further validation of the pickle payload and simply executes the code, trusting the sender (worker).
- Since there is no authentication between nodes, a malicious actor running on the same network can launch a Denial of Service (DoS) attack by sending data that can overwhelm/crash a scheduler or other server nodes.
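As a minimal illustration of why executing an unvalidated pickle is dangerous (the class name and message below are purely hypothetical), the following sketch shows how a pickled object can run arbitrary code the moment a receiver deserializes it:

```python
import pickle

class EvilOptimizer:
    # __reduce__ tells pickle how to reconstruct the object; a malicious
    # sender can make it return any callable, which the receiving process
    # then executes during pickle.loads().
    def __reduce__(self):
        return (print, ("arbitrary code executed during unpickling",))

payload = pickle.dumps(EvilOptimizer())
pickle.loads(payload)  # prints the message -- any callable could run here
```

This is exactly the position a server node is in when it unpickles an optimizer received from a worker without validation.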
It is highly recommended that the following best practices be followed when using MXNet:
- Run MXNet with least privilege, i.e. not as root.
- Run MXNet training jobs inside a secure and isolated environment. If you are using a cloud provider such as Amazon AWS, running your training job inside a private VPC is a good way to accomplish this. Additionally, configure your network security settings to allow only the connections that the cluster nodes require.
- Make sure no unauthorized actors have physical or remote access to the nodes participating in MXNet training.
- During training, one can configure MXNet to periodically save model checkpoints. To protect these model checkpoints from unauthorized access, make sure the checkpoints are written out to an encrypted storage volume, and have a provision to delete checkpoints that are no longer needed.
- When sharing trained models, or when receiving trained models from other parties, ensure that model artifacts are authenticated and integrity protected using cryptographic signatures, thus ensuring that the data received comes from trusted sources and has not been maliciously (or accidentally) modified in transit.
- By default, `mx.random` uses a static, fixed seed value. The random utilities in MXNet should therefore never be used to implement any security-critical functionality that requires cryptographically secure pseudorandom number generation.
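One way to authenticate and integrity-protect a shared model artifact, as recommended above, is to verify a keyed digest distributed out-of-band by the trusted publisher. The function below is an illustrative sketch using Python's standard library (the function name and parameters are assumptions, not an MXNet API); real deployments would typically use asymmetric signatures (e.g., GPG) rather than a shared-key HMAC:

```python
import hashlib
import hmac

def verify_artifact(path, expected_digest_hex, key):
    """Return True if the file's HMAC-SHA256 matches the digest published
    out-of-band by the trusted source of the model artifact."""
    with open(path, "rb") as f:
        digest = hmac.new(key, f.read(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information via timing side channels
    return hmac.compare_digest(digest, expected_digest_hex)
```

Reject and discard any artifact that fails verification before loading it into the framework.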
The following threats are not specific to the MXNet framework but apply to machine learning models in general.
- When deploying high-value, proprietary models for inference, care should be taken to prevent an adversary from stealing the model. The research paper Stealing Machine Learning Models via Prediction APIs describes experiments showing how an attacker can use a prediction API to leak the ML model or construct a nearly identical replica. A simple way to thwart such an attack is to avoid exposing high-precision prediction probabilities in the API response.
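The precision reduction mentioned above can be sketched as a small post-processing step applied before probabilities leave the API (the function name is illustrative, not part of MXNet):

```python
def redact_probabilities(probs, decimals=2):
    # Round class probabilities before returning them from a prediction API;
    # coarse confidence scores give a model-extraction attacker far less
    # signal per query than full-precision floats.
    return [round(p, decimals) for p in probs]

redact_probabilities([0.9134721, 0.0793252, 0.0072027])
# -> [0.91, 0.08, 0.01]
```

Returning only the top-1 label, with no scores at all, reduces the leaked signal even further at the cost of a less informative API.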