tf.keras.metrics.SparseCategoricalAccuracy


Calculates how often predictions match integer labels.

tf.keras.metrics.SparseCategoricalAccuracy(
    name='sparse_categorical_accuracy', dtype=None
)

For example, if y_true is [[2], [1]] and y_pred is [[0.1, 0.9, 0.8], [0.05, 0.95, 0]], then the sparse categorical accuracy is 1/2, or 0.5: the argmax of the first prediction is 1, which does not match the label 2, while the argmax of the second prediction is 1, which matches the label 1. If the sample weights were specified as [0.7, 0.3], the accuracy would be 0.3. You can provide logits of classes as y_pred, since the argmax of logits and of probabilities is the same.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as sparse categorical accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
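The update rule can be sketched with plain TensorFlow ops. The snippet below is an illustrative re-derivation of the weighted example above, not the library's internal implementation; the names matches, total, and count are chosen here for clarity.

import tensorflow as tf

y_true = tf.constant([[2], [1]])                      # integer labels
y_pred = tf.constant([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])
sample_weight = tf.constant([0.7, 0.3])

# A prediction counts as correct when argmax(y_pred) equals the integer label.
matches = tf.cast(
    tf.equal(tf.squeeze(y_true, axis=-1),
             tf.argmax(y_pred, axis=-1, output_type=tf.int32)),
    tf.float32)

total = tf.reduce_sum(matches * sample_weight)        # weighted correct predictions
count = tf.reduce_sum(sample_weight)                  # sum of weights
print((total / count).numpy())                        # 0.3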

Usage:

>>> m = tf.keras.metrics.SparseCategoricalAccuracy()
>>> _ = m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
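The weighted case described above can be checked the same way; this continuation assumes the metric instance m from the previous snippet and resets it before applying the weights:

>>> m.reset_states()
>>> _ = m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
...                    sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3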

Usage with tf.keras API:

model = tf.keras.Model(inputs, outputs)
model.compile(
    'sgd',
    loss='mse',
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
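For a runnable end-to-end sketch, the fragment below fills in an assumed toy model and random data around the compile call; the input width, class count, and the switch to a sparse categorical crossentropy loss are illustrative choices, not part of the original example:

import numpy as np
import tensorflow as tf

# Toy model: 4 input features, 3 classes (illustrative assumption).
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(3, activation='softmax')(inputs)
model = tf.keras.Model(inputs, outputs)

model.compile(
    'sgd',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Integer labels (not one-hot), which is what this metric expects.
x = np.random.random((8, 4)).astype('float32')
y = np.random.randint(0, 3, size=(8,))
model.fit(x, y, epochs=1, verbose=0)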

Args:

name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.

Methods

reset_states


reset_states()

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.
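A minimal sketch of calling reset_states directly, reusing the numbers from the usage example above:

m = tf.keras.metrics.SparseCategoricalAccuracy()
m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])
print(m.result().numpy())   # 0.5
m.reset_states()            # total and count are zeroed
print(m.result().numpy())   # 0.0 (empty state)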

result


result()

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.
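Because result only reads the state variables, calling it repeatedly without an intervening update_state returns the same value; a short illustration using the same example inputs as above:

m = tf.keras.metrics.SparseCategoricalAccuracy()
m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])
print(m.result().numpy())   # 0.5
print(m.result().numpy())   # still 0.5; only update_state changes total and count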

update_state


update_state(
    y_true, y_pred, sample_weight=None
)

Accumulates metric statistics.

For this metric, y_true should contain integer class indices with shape [batch_size] or [batch_size, 1], and y_pred should contain per-class scores with shape [batch_size, num_classes].

Args:

y_true: The ground truth values (integer class indices).
y_pred: The predicted values (per-class probabilities or logits).
sample_weight: Optional weighting of each example. Defaults to 1. Can be a Tensor whose rank is either 0, or the same rank as y_true, and must be broadcastable to y_true.

Returns:

Update op.
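As noted above, a sample_weight of 0 masks an example out of both total and count; a short sketch using the same inputs as the earlier examples:

m = tf.keras.metrics.SparseCategoricalAccuracy()
m.update_state([[2], [1]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]],
               sample_weight=[1.0, 0.0])
print(m.result().numpy())   # 0.0 -- only the first (incorrect) prediction is counted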