# Evaluation Metrics

Evaluation metrics provide a way to measure the performance of a learned model. They are typically used during training to monitor performance on the validation set.

# **MXNet.mx.ACE** — *Type*.

```
ACE
```

Calculates the averaged cross-entropy (logloss) for classification.

**Arguments:**

`eps::Float64`

: Prevents returning `Inf` if `p = 0`.
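
The quantity `ACE` accumulates can be sketched in a few lines (an illustration with made-up data, not the library's implementation). For integer class labels, only the predicted probability of the true class contributes to the log-loss, and `eps` keeps a zero probability from producing `log(0) = -Inf`:

```julia
# Averaged cross-entropy (log-loss) sketch; data is invented for the example.
eps = 1.0e-8
labels = [1, 3, 2]                 # true class index per sample
preds  = [0.7 0.2 0.1;            # row i: predicted class probabilities for sample i
          0.1 0.1 0.8;
          0.2 0.6 0.2]
# Average the negative log-probability assigned to each sample's true class.
ace = -sum(log(preds[i, labels[i]] + eps) for i in 1:length(labels)) / length(labels)
# ace ≈ 0.3635
```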

# **MXNet.mx.AbstractEvalMetric** — *Type*.

```
AbstractEvalMetric
```

The base type for all evaluation metrics. Sub-types should implement the following interfaces: `update!`, `reset!`, and `get` (all documented below).
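
This interface can be sketched with a minimal, self-contained stand-in. Note the `AbstractEvalMetric` below is a local illustration rather than the MXNet type, and `SampleCounter` is a made-up metric; a real metric would subtype `MXNet.mx.AbstractEvalMetric` and extend the `mx` functions instead:

```julia
# Local stand-in for the abstract type, so the sketch runs without MXNet.
abstract type AbstractEvalMetric end

# A toy metric that only counts how many labels it has seen.
mutable struct SampleCounter <: AbstractEvalMetric
    n::Int
    SampleCounter() = new(0)
end

# Accumulate state from one mini-batch of labels and predictions.
update!(m::SampleCounter, labels, preds) = (m.n += length(labels); m)

# Clear the accumulation counter, e.g. between epochs.
reset!(m::SampleCounter) = (m.n = 0; m)

# Report the accumulated values as a list of name-value pairs.
Base.get(m::SampleCounter) = [(:nsamples, m.n)]
```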

# **MXNet.mx.Accuracy** — *Type*.

```
Accuracy
```

Multiclass classification accuracy.

Calculates the mean accuracy per sample for a softmax in one dimension. For a multi-dimensional softmax, the mean accuracy over all dimensions is calculated.
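
For the one-dimensional softmax case, the computation amounts to taking the argmax over the class axis and comparing it with the labels. A small sketch with invented data:

```julia
# Multiclass accuracy sketch; data is made up for the example.
labels = [1, 3, 2, 2]              # true class index per sample
preds  = [0.7 0.2 0.1;            # row i: predicted class probabilities
          0.1 0.1 0.8;
          0.2 0.6 0.2;
          0.3 0.3 0.4]
# Predicted class = index of the largest probability in each row.
predicted = [argmax(preds[i, :]) for i in 1:size(preds, 1)]
acc = count(predicted .== labels) / length(labels)
# acc == 0.75 (three of the four samples are classified correctly)
```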

# **MXNet.mx.MSE** — *Type*.

```
MSE
```

Mean Squared Error.

Calculates the mean squared error regression loss. Requires that label and prediction have the same shape.
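
The loss itself is the familiar mean of squared differences; the values below are invented for illustration:

```julia
# Mean squared error sketch; label and prediction must have the same shape.
labels = [1.0, 2.0, 3.0]
preds  = [1.5, 2.0, 2.0]
mse = sum((preds .- labels) .^ 2) / length(labels)
# mse ≈ 0.4167  ((0.5^2 + 0.0^2 + 1.0^2) / 3)
```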

# **MXNet.mx.MultiACE** — *Type*.

```
MultiACE
```

Calculates the averaged cross-entropy per class and overall (see `ACE`). This can be used to quantify the influence of different classes on the overall loss.
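
A sketch of the per-class grouping (illustration with made-up data, not the library code): each sample's log-loss is attributed to its true class, then averaged within each class as well as over all samples.

```julia
# Per-class averaged cross-entropy sketch for a two-class problem.
eps = 1.0e-8
labels = [1, 1, 2]
preds  = [0.9 0.1;
          0.6 0.4;
          0.2 0.8]
# Log-loss contribution of each sample's true class.
losses = [-log(preds[i, labels[i]] + eps) for i in 1:length(labels)]
# Average within each class, then over all samples.
per_class = [sum(losses[labels .== c]) / count(labels .== c) for c in 1:2]
overall = sum(losses) / length(losses)
```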

# **MXNet.mx.MultiMetric** — *Type*.

```
MultiMetric(metrics::Vector{AbstractEvalMetric})
```

Combine multiple metrics in one and get a result for all of them.

**Usage**

To calculate both accuracy (`Accuracy`) and log-loss (`ACE`):

```
mx.fit(..., eval_metric = mx.MultiMetric([mx.Accuracy(), mx.ACE()]))
```

# **MXNet.mx.NMSE** — *Type*.

```
NMSE
```

Normalized Mean Squared Error.

Note that there are various ways to do the *normalization*; which one is appropriate depends on your own context, so please judge your problem setting first. If the current implementation is not suitable for you, feel free to file an issue on GitHub.

Let me show you a use case of this kind of normalization:

Bob is training a network for option pricing. Option pricing is a regression problem (price prediction). There are many option contracts on the same underlying stock but with different strike prices. For example, there is a stock `S`; its market price is 1000, and there are two call option contracts with different strike prices. Assume Bob obtains the outcomes in the following table:

```
+--------+----------------+----------------+--------------+
| | Strike Price | Market Price | Pred Price |
+--------+----------------+----------------+--------------+
| Op 1 | 1500 | 100 | 80 |
+--------+----------------+----------------+--------------+
| Op 2 | 500 | 10 | 8 |
+--------+----------------+----------------+--------------+
```

Now, obviously, Bob will calculate the normalized MSE as `((100 - 80) / 100)^2 = 0.04` for Op 1 and `((10 - 8) / 10)^2 = 0.04` for Op 2.

Both of the predicted prices have the same degree of (relative) error.

For more discussion about normalized MSE, please also see #211.
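
Bob's calculation can be sketched directly (an illustration of normalizing the squared error by the label, using the table's numbers):

```julia
# Normalized MSE sketch for the two option contracts above.
labels = [100.0, 10.0]             # market prices
preds  = [80.0, 8.0]               # predicted prices
# Squared relative error, normalized by the label, averaged over samples.
nmse = sum(((preds .- labels) ./ labels) .^ 2) / length(labels)
# nmse ≈ 0.04: both contracts contribute the same relative error of 0.2
```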

# **MXNet.mx.SeqMetric** — *Type*.

```
SeqMetric(metrics::Vector{AbstractEvalMetric})
```

Apply a different metric to each output. This is especially useful for `mx.Group`.

**Usage**

Calculate accuracy (`Accuracy`) for the first output and log-loss (`ACE`) for the second output:

```
mx.fit(..., eval_metric = mx.SeqMetric([mx.Accuracy(), mx.ACE()]))
```

# **MXNet.mx.update!** — *Method*.

```
update!(metric, labels, preds)
```

Update and accumulate metrics.

**Arguments:**

`metric::AbstractEvalMetric`
: the metric object.

`labels::Vector{NDArray}`
: the labels from the data provider.

`preds::Vector{NDArray}`
: the outputs (predictions) of the network.

# **MXNet.mx.NullMetric** — *Type*.

```
NullMetric()
```

A metric that calculates nothing. Can be used to ignore an output during training.

# **Base.get** — *Method*.

```
get(metric)
```

Get the accumulated metrics.

Returns `Vector{Tuple{Base.Symbol, Real}}`, a list of name-value pairs. For example, `[(:accuracy, 0.9)]`.

# **MXNet.mx.hasNDArraySupport** — *Method*.

```
hasNDArraySupport(metric) -> Val{true/false}
```

Trait for `_update_single_output`: should return `Val{true}()` if the metric can handle `NDArray` directly, and `Val{false}()` if it requires `Array`. Metrics that work with `NDArray` can be asynchronous, while native Julia arrays require copying the output of the network, which is a blocking operation.

# **MXNet.mx.reset!** — *Method*.

```
reset!(metric)
```

Reset the accumulation counter.