Calculates how often predictions match labels.
tf.keras.metrics.Accuracy(
name='accuracy', dtype=None
)
For example, if y_true is [1, 2, 3, 4] and y_pred is [0, 2, 3, 4]
then the accuracy is 3/4 or .75. If the weights were specified as
[1, 1, 0, 0] then the accuracy would be 1/2 or .5.
This metric creates two local variables, total and count, that are used to
compute the frequency with which y_pred matches y_true. This frequency is
ultimately returned as accuracy: an idempotent operation that simply
divides total by count.
If sample_weight is None, weights default to 1.
Use sample_weight of 0 to mask values.
>>> m = tf.keras.metrics.Accuracy()
>>> _ = m.update_state([1, 2, 3, 4], [0, 2, 3, 4])
>>> m.result().numpy()
0.75
>>> m.reset_states()
>>> _ = m.update_state([1, 2, 3, 4], [0, 2, 3, 4], sample_weight=[1, 1, 0, 0])
>>> m.result().numpy()
0.5
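The numbers above can be reproduced by hand. The following is a rough sketch of the total/count bookkeeping described earlier, written in plain NumPy; it is an illustration of the arithmetic, not the library's implementation.
import numpy as np

# Illustration only: mirrors the total/count bookkeeping, not the actual metric code.
def weighted_accuracy(y_true, y_pred, sample_weight=None):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    matches = (y_true == y_pred).astype(np.float64)
    if sample_weight is None:
        sample_weight = np.ones_like(matches)
    sample_weight = np.asarray(sample_weight, dtype=np.float64)
    total = np.sum(matches * sample_weight)  # weighted number of matches
    count = np.sum(sample_weight)            # total weight seen
    return total / count

print(weighted_accuracy([1, 2, 3, 4], [0, 2, 3, 4]))                # 0.75
print(weighted_accuracy([1, 2, 3, 4], [0, 2, 3, 4], [1, 1, 0, 0]))  # 0.5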
Usage with tf.keras API:
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Accuracy()])
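A minimal end-to-end sketch of that compile() call is shown below; the layer shapes and the random training data are placeholders chosen for illustration, not part of the original example.
import numpy as np
import tensorflow as tf

# Placeholder architecture and data, used only to show how the metric is wired in.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Accuracy()])

# Accuracy counts exact matches between y_pred and y_true, so it is only
# informative when the model's outputs are label-like values.
x = np.random.random((8, 4)).astype('float32')
y = np.random.randint(0, 2, size=(8, 1)).astype('float32')
model.fit(x, y, epochs=1, verbose=0)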
Args:
fn: The metric function to wrap, with signature fn(y_true, y_pred, **kwargs).
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
**kwargs: The keyword arguments that are passed on to fn.
reset_states()
Resets all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.
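For example, in a hand-written evaluation loop the reset typically happens once per epoch; the batches below are stand-ins for a real data source.
import tensorflow as tf

m = tf.keras.metrics.Accuracy()
for epoch in range(2):
    # Stand-in batches of (y_true, y_pred) pairs.
    for y_true, y_pred in [([1, 2], [1, 2]), ([3, 4], [0, 4])]:
        m.update_state(y_true, y_pred)
    print('epoch', epoch, 'accuracy:', m.result().numpy())  # 0.75 both times
    m.reset_states()  # start the next epoch's accumulation from zero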
result()
Computes and returns the metric value tensor.
Result computation is an idempotent operation that simply calculates the metric value using the state variables.
update_state(
y_true, y_pred, sample_weight=None
)
Accumulates metric statistics.
y_true and y_pred should have the same shape.
Args:
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Defaults to 1. Can be
a Tensor whose rank is either 0, or the same rank as y_true,
and must be broadcastable to y_true.
Returns:
Update op.
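To illustrate the broadcasting rules for sample_weight, the sketch below passes a rank-0 weight and then a weight with the same rank as y_true; the values are made up for demonstration.
import tensorflow as tf

m = tf.keras.metrics.Accuracy()

# Rank-0 weight: broadcast over every example, so the ratio is unchanged.
m.update_state([1, 2, 3, 4], [0, 2, 3, 4], sample_weight=0.5)
print(m.result().numpy())  # 0.75

m.reset_states()

# Same-rank weight: one weight per example; zero entries mask examples out.
m.update_state([1, 2, 3, 4], [0, 2, 3, 4], sample_weight=[0.0, 0.0, 1.0, 1.0])
print(m.result().numpy())  # 1.0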