Based on the Gluon API specification, the new Gluon library in Apache MXNet provides a clear, concise, and simple API for deep learning. It makes it easy to prototype, build, and train deep learning models without sacrificing training speed. Install the latest version of MXNet to get access to Gluon by either following these easy steps or using this simple command:

pip install mxnet --pre --user

Gluon provides four major advantages:

  1. Simple, Easy-to-Understand Code: Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers.
  2. Flexible, Imperative Structure: Gluon does not require the neural network model to be rigidly defined, but rather brings the training algorithm and model closer together to provide flexibility in the development process.
  3. Dynamic Graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow.
  4. High Performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides.

The Straight Dope

The community is also working on a parallel effort to create a foundational resource for learning about machine learning. The Straight Dope is a book composed of introductory as well as advanced tutorials, all based on the Gluon interface.

Code Examples

Simple, Easy-to-Understand Code

Use plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers:

from mxnet import gluon

net = gluon.nn.Sequential()
# When instantiated, Sequential stores a chain of neural network layers.
# Once presented with data, Sequential executes each layer in turn, using
# the output of one layer as the input for the next.
with net.name_scope():
    net.add(gluon.nn.Dense(256, activation="relu"))  # 1st hidden layer (256 nodes)
    net.add(gluon.nn.Dense(256, activation="relu"))  # 2nd hidden layer (256 nodes)
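
As a quick sketch of how this block is used, you can initialize the parameters and push a batch through; the initializer and the 784-dimensional input below are illustrative assumptions, not part of the original example:

import mxnet as mx
from mxnet import nd

net.collect_params().initialize(mx.init.Normal(sigma=0.1))  # assumed initializer
data = nd.random.uniform(shape=(4, 784))  # hypothetical batch of 4 flattened inputs
print(net(data).shape)  # (4, 256)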

Flexible, Imperative Structure

Prototype, build, and train neural networks in a fully imperative manner using the MXNet autograd package and the Gluon Trainer:

epochs = 10

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        with autograd.record():  # start recording the derivatives
            output = net(data)  # the forward iteration
            loss = softmax_cross_entropy(output, label)
        loss.backward()  # backpropagate the error
        trainer.step(data.shape[0])  # update parameters, scaled by batch size
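
The loop above assumes that the network, the data loader, the loss function, and the trainer are already defined. Here is a minimal sketch of that setup, assuming MNIST as the dataset (the hyperparameters and initializer are illustrative choices, not part of the original example):

import mxnet as mx
from mxnet import autograd, gluon

batch_size = 64
# MNIST is an assumed dataset choice for this sketch
train_data = gluon.data.DataLoader(
    gluon.data.vision.MNIST(train=True,
                            transform=lambda data, label:
                                (data.astype('float32') / 255, label)),
    batch_size=batch_size, shuffle=True)

softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
net.collect_params().initialize(mx.init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})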

Dynamic Graphs

Build neural networks on the fly for use cases where they must change in size and shape during model training:

def forward(self, F, inputs, tree):
    children_outputs = [self.forward(F, inputs, child)
                        for child in tree.children]
    # Recursively builds the neural network based on each input sentence’s
    # syntactic structure during the model definition and training process
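
The snippet above comes from a tree-structured model, so it is not runnable on its own. As a self-contained sketch of the same idea, here is a hypothetical Block whose forward pass uses native Python control flow, so the graph is rebuilt for every input (the layer sizes and the data-dependent loop bound are arbitrary):

import mxnet as mx
from mxnet import gluon, nd

class DynamicNet(gluon.Block):
    def __init__(self, **kwargs):
        super(DynamicNet, self).__init__(**kwargs)
        with self.name_scope():
            self.proj = gluon.nn.Dense(64, activation="relu")
            self.hidden = gluon.nn.Dense(64, activation="relu")
            self.out = gluon.nn.Dense(10)

    def forward(self, x):
        x = self.proj(x)
        # The number of hidden-layer applications depends on the data,
        # so each forward pass can build a differently shaped graph
        steps = int(nd.max(x).asscalar() * 10) % 3 + 1
        for _ in range(steps):
            x = self.hidden(x)
        return self.out(x)

dyn_net = DynamicNet()
dyn_net.collect_params().initialize()
print(dyn_net(nd.random.uniform(shape=(2, 32))).shape)  # (2, 10)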

High Performance

Easily cache the neural network for high performance by defining it with HybridSequential and calling the hybridize method:

from mxnet.gluon import nn

net = nn.HybridSequential()
with net.name_scope():
    net.add(nn.Dense(256, activation="relu"))
    net.add(nn.Dense(128, activation="relu"))

To compile and optimize the HybridSequential network, we can then call its hybridize method:
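
net.hybridize()

Once hybridize is called, Gluon caches an optimized symbolic graph after the first forward pass, and subsequent calls run through the faster underlying engine.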