tf.keras.losses.Loss

Loss base class.

tf.keras.losses.Loss(
    reduction=losses_utils.ReductionV2.AUTO, name=None
)

To be implemented by subclasses:

* call(): Contains the logic for loss calculation using y_true, y_pred.

Example subclass implementation:

import tensorflow as tf

class MeanSquaredError(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        y_pred = tf.convert_to_tensor(y_pred)
        y_true = tf.cast(y_true, y_pred.dtype)
        return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
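Calling the subclass above works like any built-in loss; a minimal usage sketch (the data below is purely illustrative):

mse = MeanSquaredError()
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# The default AUTO reduction averages the per-sample losses here.
print(mse(y_true, y_pred).numpy())  # 0.5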

When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, please use 'SUM' or 'NONE' reduction types, and reduce losses explicitly in your training loop. Using 'AUTO' or 'SUM_OVER_BATCH_SIZE' will raise an error.

Please see https://www.tensorflow.org/tutorials/distribute/custom_training for more details on this.

You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size like:

with strategy.scope():
    loss_obj = tf.keras.losses.CategoricalCrossentropy(
        reduction=tf.keras.losses.Reduction.NONE)
    ....
    loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
            (1. / global_batch_size))
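TF 2.x also provides tf.nn.compute_average_loss, which applies the same global-batch-size scaling; a minimal sketch reusing loss_obj, labels, predictions, and global_batch_size from the snippet above:

per_example_loss = loss_obj(labels, predictions)  # shape [batch_size] under NONE reduction
loss = tf.nn.compute_average_loss(
    per_example_loss, global_batch_size=global_batch_size)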

Args:

* reduction: (Optional) Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context; for almost all cases this defaults to SUM_OVER_BATCH_SIZE.
* name: Optional name for the op.
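For example, a built-in subclass can be configured through these arguments; a minimal sketch (the choice of loss is illustrative):

mae = tf.keras.losses.MeanAbsoluteError(
    reduction=tf.keras.losses.Reduction.SUM, name='mae_sum')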

Methods

__call__

__call__(
    y_true, y_pred, sample_weight=None
)

Invokes the Loss instance.

Args:

* y_true: Ground truth values, with shape [batch_size, d0, .. dN].
* y_pred: The predicted values, with shape [batch_size, d0, .. dN].
* sample_weight: Optional Tensor acting as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by that value. If sample_weight has shape [batch_size], the loss for each sample of the batch is rescaled by the corresponding element of sample_weight. If sample_weight has shape [batch_size, d0, .. dN-1] (or can be broadcast to it), each loss element of y_pred is scaled by the corresponding value of sample_weight.

Returns:

Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)

Raises:

ValueError: If the shape of sample_weight is invalid.
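To illustrate the reduction and sample_weight behavior described above, a minimal sketch with the built-in MeanSquaredError (the data is illustrative):

mse_none = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# NONE reduction keeps one loss value per sample: shape [batch_size].
print(mse_none(y_true, y_pred).numpy())  # [0.5 0.5]
# A [batch_size] sample_weight rescales each sample's loss.
print(mse_none(y_true, y_pred, sample_weight=[1., 0.]).numpy())  # [0.5 0.]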

call

call(
    y_true, y_pred
)

Invokes the Loss instance.

Args:

* y_true: Ground truth values.
* y_pred: The predicted values.

from_config

@classmethod
from_config(
    config
)

Instantiates a Loss from its config (output of get_config()).

Args:

* config: Output of get_config().

Returns:

A Loss instance.

get_config

get_config()

Returns the config dictionary for a Loss instance.
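get_config and from_config round-trip a loss configuration; a minimal sketch with a built-in subclass (the name is illustrative):

original = tf.keras.losses.MeanSquaredError(name='my_mse')
config = original.get_config()  # dict holding the 'reduction' and 'name' settings
restored = tf.keras.losses.MeanSquaredError.from_config(config)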