# Evaluation Metrics

Evaluation metrics measure the performance of a learned model. They are typically used during training to monitor performance on the validation set.

# MXNet.mx.ACE - Type

ACE


Calculates the averaged cross-entropy (logloss) for classification.

Arguments:

• eps::Float64: Prevents returning Inf if p = 0.
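As a plain-Julia sketch (not the mx implementation), averaged cross-entropy over a batch can be computed as follows, assuming labels are integer class indices and predictions form a classes-by-samples probability matrix:

```julia
# Minimal sketch of averaged cross-entropy (log-loss).
# `eps` guards against log(0), matching the role of the eps argument above.
function ace(labels::Vector{Int}, probs::Matrix{Float64}; eps = 1e-8)
    n = length(labels)
    total = 0.0
    for i in 1:n
        # accumulate -log p(true class) for each sample
        total -= log(eps + probs[labels[i], i])
    end
    total / n
end
```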

# MXNet.mx.AbstractEvalMetric - Type

AbstractEvalMetric


The base class for all evaluation metrics. Subtypes should implement the following interface: update!(metric, labels, preds), get(metric), and reset!(metric).

# MXNet.mx.Accuracy - Type

Accuracy


Multiclass classification accuracy.

Calculates the mean accuracy per sample for softmax in one dimension. For a multi-dimensional softmax the mean accuracy over all dimensions is calculated.
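A self-contained sketch of the one-dimensional case, assuming integer class labels and a classes-by-samples probability matrix (this is an illustration, not the mx implementation):

```julia
# Multiclass accuracy: fraction of samples whose highest-probability
# class matches the label.
function accuracy(labels::Vector{Int}, preds::Matrix{Float64})
    hits = count(i -> argmax(view(preds, :, i)) == labels[i], 1:length(labels))
    hits / length(labels)
end
```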

# MXNet.mx.MSE - Type

MSE


Mean Squared Error.

Calculates the mean squared error regression loss. Requires that label and prediction have the same shape.
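In plain Julia this loss amounts to (a minimal sketch, not the mx implementation):

```julia
# Mean squared error; label and pred must have the same shape.
mse(label, pred) = sum(abs2, label .- pred) / length(label)
```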

# MXNet.mx.MultiACE - Type

MultiACE


Calculates the averaged cross-entropy per class and overall (see ACE). This can be used to quantify the influence of different classes on the overall loss.
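The per-class accumulation can be sketched in plain Julia as follows (the function name and signature are illustrative, not mx.MultiACE's actual implementation):

```julia
# Accumulate -log p(true class) separately for each class, so the
# classes contributing most to the overall loss stand out.
function multi_ace(labels::Vector{Int}, probs::Matrix{Float64}; eps = 1e-8)
    nclasses = size(probs, 1)
    losses = zeros(nclasses)
    counts = zeros(Int, nclasses)
    for (i, c) in enumerate(labels)
        losses[c] -= log(eps + probs[c, i])
        counts[c] += 1
    end
    # average per class; classes with no samples report 0.0
    [counts[c] > 0 ? losses[c] / counts[c] : 0.0 for c in 1:nclasses]
end
```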

# MXNet.mx.MultiMetric - Type

MultiMetric(metrics::Vector{AbstractEvalMetric})


Combine multiple metrics into one and report results for all of them.

Usage

To calculate both accuracy (Accuracy) and log-loss (ACE):

  mx.fit(..., eval_metric = mx.MultiMetric([mx.Accuracy(), mx.ACE()]))


# MXNet.mx.NMSE - Type

NMSE


Normalized Mean Squared Error

Note that there are various ways to do the normalization; which one is appropriate depends on your problem setting, so consider it first. If the current implementation does not suit your use case, feel free to file an issue on GitHub.

The following use case illustrates this kind of normalization:

Bob is training a network for option pricing. Option pricing is a regression problem (price prediction). There are many option contracts on the same underlying stock but with different strike prices. For example, consider a stock S whose market price is 1000, and two call option contracts with different strike prices. Assume Bob obtains the outcomes in the following table:

|      | Strike Price | Market Price | Pred Price |
|------|--------------|--------------|------------|
| Op 1 | 1500         | 100          | 80         |
| Op 2 | 500          | 10           | 8          |


Now Bob will calculate the normalized MSE, dividing each error by its market price:

((100 - 80) / 100)^2 = 0.04 and ((10 - 8) / 10)^2 = 0.04

Both predicted prices have the same degree of relative error.
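A minimal sketch of this label-normalized MSE in plain Julia (the exact normalization used by mx.NMSE may differ; treat this as an illustration of the idea above):

```julia
# Normalized MSE: squared error divided by the squared label, averaged.
# Normalizing by the label is an assumption, per the note above about
# the normalization being context-dependent.
nmse(label, pred) = sum(((label .- pred) ./ label) .^ 2) / length(label)
```

For Bob's two options, nmse([100.0, 10.0], [80.0, 8.0]) evaluates to 0.04 (up to floating-point rounding), reflecting equal relative error.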

# MXNet.mx.SeqMetric - Type

SeqMetric(metrics::Vector{AbstractEvalMetric})


Apply a different metric to each output. This is especially useful for mx.Group.

Usage

Calculate accuracy (Accuracy) for the first output and log-loss (ACE) for the second output:

  mx.fit(..., eval_metric = mx.SeqMetric([mx.Accuracy(), mx.ACE()]))


# MXNet.mx.update! - Method

update!(metric, labels, preds)


Update and accumulate metrics.

Arguments:

• metric::AbstractEvalMetric: the metric object.
• labels::Vector{NDArray}: the labels from the data provider.
• preds::Vector{NDArray}: the outputs (predictions) of the network.
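The accumulate/get/reset life cycle can be illustrated with a self-contained plain-Julia metric. Names like MeanAbsError and get_metric are illustrative: the real API extends Base.get and operates on NDArray batches rather than plain vectors.

```julia
# A toy metric following the interface pattern of this section:
# update! accumulates over batches, get_metric reports, reset! clears.
abstract type AbstractEvalMetric end

mutable struct MeanAbsError <: AbstractEvalMetric
    sum_abs::Float64
    n::Int
    MeanAbsError() = new(0.0, 0)
end

function update!(m::MeanAbsError, labels::Vector, preds::Vector)
    for (l, p) in zip(labels, preds)
        m.sum_abs += sum(abs, l .- p)
        m.n += length(l)
    end
    m
end

# stands in for Base.get(metric) to avoid shadowing Base.get here
get_metric(m::MeanAbsError) = [(:mae, m.sum_abs / max(m.n, 1))]

reset!(m::MeanAbsError) = (m.sum_abs = 0.0; m.n = 0; m)
```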

# MXNet.mx.NullMetric - Type

NullMetric()


A metric that calculates nothing. Can be used to ignore an output during training.

# Base.get - Method

get(metric)


Get the accumulated metrics.

Returns Vector{Tuple{Base.Symbol, Real}}, a list of name-value pairs. For example, [(:accuracy, 0.9)].

# MXNet.mx.hasNDArraySupport - Method

hasNDArraySupport(metric) -> Val{true/false}


Trait for _update_single_output: should return Val{true}() if the metric can handle NDArray directly, and Val{false}() if it requires Array. Metrics that work with NDArrays can be asynchronous, while native Julia arrays require copying the output of the network, which is a blocking operation.
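The Val trait-dispatch pattern this describes can be sketched in self-contained plain Julia (FastMetric, SlowMetric, and process are illustrative names, not part of the mx API):

```julia
# Dispatch on Val{true}/Val{false} to pick a code path at compile time:
# one branch for metrics that consume NDArray directly (async), one for
# metrics that need a blocking copy to Array.
struct FastMetric end
struct SlowMetric end

hasNDArraySupport(::FastMetric) = Val{true}()
hasNDArraySupport(::SlowMetric) = Val{false}()

process(m) = process(m, hasNDArraySupport(m))
process(m, ::Val{true})  = :ndarray_path  # operate on NDArray directly
process(m, ::Val{false}) = :array_path    # copy to Array first
```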

# MXNet.mx.reset! - Method

reset!(metric)

Reset the accumulation counter.