tf.keras.losses.SparseCategoricalCrossentropy

Computes the crossentropy loss between the labels and predictions.

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False, reduction=losses_utils.ReductionV2.AUTO,
    name='sparse_categorical_crossentropy'
)

Use this crossentropy loss function when there are two or more label classes. Labels are expected to be provided as integers. If you want to provide labels using a one-hot representation, use the CategoricalCrossentropy loss instead. There should be num_classes floating point values per feature for y_pred and a single floating point value per feature for y_true.

In the snippet below, there is a single floating point value per example for y_true and num_classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes].

Usage:

import tensorflow as tf

scce = tf.keras.losses.SparseCategoricalCrossentropy()
loss = scce(
  tf.convert_to_tensor([0, 1, 2]),
  tf.convert_to_tensor([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]]))
print('Loss: ', loss.numpy())  # Loss: 0.3239
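
Since the labels above are plain integers, the same loss can also be computed with CategoricalCrossentropy after one-hot encoding them. A quick sketch (not part of the original example) showing the two encodings should agree:

y_true = tf.convert_to_tensor([0, 1, 2])
y_pred = tf.convert_to_tensor([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])

# One-hot encode the integer labels and use the dense variant of the loss.
one_hot = tf.one_hot(y_true, depth=3)
dense_loss = tf.keras.losses.CategoricalCrossentropy()(one_hot, y_pred)
print('Loss: ', dense_loss.numpy())  # same value as above: 0.3239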

Usage with the compile API:

model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy())
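
If the model outputs raw logits (no softmax on the final layer), construct the loss with from_logits=True so the softmax is folded into the loss computation, which is more numerically stable. A minimal sketch; the layer widths and input shape here are illustrative only, not from the original page:

logits_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),  # hypothetical input width
    tf.keras.layers.Dense(3)  # raw logits: no softmax activation
])
logits_model.compile(
    'sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))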

Args:

from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
reduction: (Optional) Type of reduction to apply to the loss. Defaults to AUTO, meaning the reduction option is determined by the usage context.
name: (Optional) Name for the op. Defaults to 'sparse_categorical_crossentropy'.

Methods

__call__

__call__(
    y_true, y_pred, sample_weight=None
)

Invokes the Loss instance.

Args:

y_true: Ground truth values, with shape [batch_size, d0, .. dN].
y_pred: The predicted values, with shape [batch_size, d0, .. dN].
sample_weight: Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector.

Returns:

Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)

Raises:

ValueError: If the shape of sample_weight is invalid.
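
For illustration (not from the original page), constructing the loss with reduction=tf.keras.losses.Reduction.NONE returns one loss value per example, and passing sample_weight rescales each example's contribution to the reduced loss:

y_true = tf.convert_to_tensor([0, 1, 2])
y_pred = tf.convert_to_tensor([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])

# Unreduced: shape [3], one crossentropy value per example.
unreduced = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
print(unreduced(y_true, y_pred).numpy())

# Weighted: the second example's loss is zeroed out before reduction.
scce = tf.keras.losses.SparseCategoricalCrossentropy()
print(scce(y_true, y_pred, sample_weight=tf.constant([1., 0., 1.])).numpy())
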
from_config

@classmethod
from_config(
    config
)

Instantiates a Loss from its config (output of get_config()).

Args:

config: Output of get_config().

Returns:

A Loss instance.

get_config

get_config()

Returns the config dictionary for a Loss instance.
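
Together, get_config and from_config let a configured loss be serialized and rebuilt. A short round-trip sketch (uses only the documented methods above):

scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
config = scce.get_config()  # plain dict, e.g. with 'from_logits', 'reduction', 'name'
restored = tf.keras.losses.SparseCategoricalCrossentropy.from_config(config)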