Calculates how often predictions match one-hot labels.
Inherits From: `MeanMetricWrapper`, `Mean`, `Metric`, `Layer`, `Module`
```python
tf.keras.metrics.CategoricalAccuracy(
    name='categorical_accuracy', dtype=None
)
```
You can provide logits of classes as `y_pred`, since the argmax of logits and probabilities is the same.
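For example, passing raw logits and their softmax probabilities produces the same metric value. The sketch below uses illustrative values:

```python
import tensorflow as tf

# Illustrative values: softmax preserves the argmax, so logits and
# probabilities give the same categorical accuracy.
y_true = [[0, 0, 1], [0, 1, 0]]
logits = [[1.0, 4.0, 2.0], [2.0, 5.0, 1.0]]
probs = tf.nn.softmax(logits)

m_logits = tf.keras.metrics.CategoricalAccuracy()
m_logits.update_state(y_true, logits)

m_probs = tf.keras.metrics.CategoricalAccuracy()
m_probs.update_state(y_true, probs)

print(m_logits.result().numpy(), m_probs.result().numpy())  # 0.5 0.5
```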
This metric creates two local variables, `total` and `count`, that are used to compute the frequency with which `y_pred` matches `y_true`. This frequency is ultimately returned as `categorical accuracy`: an idempotent operation that simply divides `total` by `count`.
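As an illustrative sketch, the two state variables can be inspected directly after an update; the attribute names `total` and `count` follow the `tf.keras.metrics.Mean` implementation that this metric wraps:

```python
import tensorflow as tf

# One of the two predictions below matches its one-hot label.
m = tf.keras.metrics.CategoricalAccuracy()
m.update_state([[0, 0, 1], [0, 1, 0]],
               [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])

print(m.total.numpy())    # 1.0 -> one correct prediction
print(m.count.numpy())    # 2.0 -> two samples seen
print(m.result().numpy()) # 0.5 == total / count
```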
`y_pred` and `y_true` should be passed in as vectors of probabilities, rather than as labels. If necessary, use `tf.one_hot` to expand `y_true` as a vector.
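A minimal sketch of expanding integer labels with `tf.one_hot` (class count and values are illustrative):

```python
import tensorflow as tf

# Integer class labels are expanded to one-hot vectors before being
# passed as y_true. `depth` is the number of classes (3 here).
labels = [2, 1]
y_true = tf.one_hot(labels, depth=3)  # [[0., 0., 1.], [0., 1., 0.]]
y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]]

m = tf.keras.metrics.CategoricalAccuracy()
m.update_state(y_true, y_pred)
print(m.result().numpy())  # 0.5
```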
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
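For instance, a weight of 0 removes a sample from the average entirely (values below are illustrative):

```python
import tensorflow as tf

# The first sample is incorrect but masked out by a weight of 0, so
# only the second (correct) sample contributes to the result.
m = tf.keras.metrics.CategoricalAccuracy()
m.update_state([[0, 0, 1], [0, 1, 0]],
               [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]],
               sample_weight=[0.0, 1.0])
print(m.result().numpy())  # 1.0
```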
| Args | |
|---|---|
| `name` | (Optional) string name of the metric instance. |
| `dtype` | (Optional) data type of the metric result. |
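A short sketch of the constructor arguments; the custom name and dtype below are arbitrary choices for illustration:

```python
import tensorflow as tf

# A custom name appears in logs and History keys; dtype controls the
# type of the result tensor.
m = tf.keras.metrics.CategoricalAccuracy(name='my_categorical_accuracy',
                                         dtype='float64')
m.update_state([[0, 1]], [[0.3, 0.7]])
print(m.name)            # 'my_categorical_accuracy'
print(m.result().dtype)  # <dtype: 'float64'>
print(m.result().numpy())  # 1.0
```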
Standalone usage:

```python
>>> m = tf.keras.metrics.CategoricalAccuracy()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
...                 [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5

>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
...                 [0.05, 0.95, 0]],
...                sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
```
Usage with `compile()` API:

```python
model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.CategoricalAccuracy()])
```
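A more complete, self-contained sketch; the model, data, and shapes below are illustrative assumptions, not part of the official example:

```python
import numpy as np
import tensorflow as tf

# Labels are one-hot encoded, matching what CategoricalAccuracy expects.
x = np.random.random((32, 4)).astype('float32')
y = tf.one_hot(np.random.randint(0, 3, size=(32,)), depth=3)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=[tf.keras.metrics.CategoricalAccuracy()])
model.fit(x, y, epochs=1, verbose=0)
```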
Methods
merge_state

```python
merge_state(
    metrics
)
```
Merges the state from one or more metrics.
This method can be used by distributed systems to merge the state computed by different metric instances. Typically the state will be stored in the form of the metric's weights. For example, a `tf.keras.metrics.Mean` metric contains a list of two weight values: a total and a count. If there were two instances of a `tf.keras.metrics.Accuracy` that each independently aggregated partial state for an overall accuracy calculation, these two metrics' states could be combined as follows:
```python
>>> m1 = tf.keras.metrics.Accuracy()
>>> _ = m1.update_state([[1], [2]], [[0], [2]])
>>> m2 = tf.keras.metrics.Accuracy()
>>> _ = m2.update_state([[3], [4]], [[3], [4]])
>>> m2.merge_state([m1])
>>> m2.result().numpy()
0.75
```
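The same idea applies to `CategoricalAccuracy`, which wraps a `Mean`: merging sums the underlying `total` and `count` of the instances. The sketch below uses illustrative values:

```python
import tensorflow as tf

# Two instances aggregate partial state (e.g. on different data shards)
# and are then merged into a single accuracy.
m1 = tf.keras.metrics.CategoricalAccuracy()
m1.update_state([[0, 0, 1]], [[0.1, 0.9, 0.8]])    # incorrect prediction

m2 = tf.keras.metrics.CategoricalAccuracy()
m2.update_state([[0, 1, 0]], [[0.05, 0.95, 0.0]])  # correct prediction

m2.merge_state([m1])
print(m2.result().numpy())  # 0.5 -> 1 correct out of 2 samples
```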
| Args | |
|---|---|
| `metrics` | an iterable of metrics. The metrics must have compatible state. |
| Raises | |
|---|---|
| `ValueError` | If the provided iterable does not contain metrics matching the metric's required specifications. |
reset_state

```python
reset_state()
```
Resets all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.
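A minimal sketch of the effect (values are illustrative); after the reset, `total` and `count` are zero again:

```python
import tensorflow as tf

m = tf.keras.metrics.CategoricalAccuracy()
m.update_state([[0, 1, 0]], [[0.05, 0.95, 0.0]])
print(m.result().numpy())  # 1.0

m.reset_state()
print(m.result().numpy())  # 0.0 -> state variables are back to zero
```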
result

```python
result()
```
Computes and returns the scalar metric value tensor or a dict of scalars.
Result computation is an idempotent operation that simply calculates the metric value using the state variables.
| Returns | |
|---|---|
| A scalar tensor, or a dictionary of scalar tensors. | |
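For example, calling `result()` repeatedly returns the same value and leaves the state untouched (illustrative sketch):

```python
import tensorflow as tf

# result() only reads the state variables; it has no side effects.
m = tf.keras.metrics.CategoricalAccuracy()
m.update_state([[0, 0, 1], [0, 1, 0]],
               [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])
print(m.result().numpy())  # 0.5
print(m.result().numpy())  # 0.5 -> unchanged
```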
update_state

```python
update_state(
    y_true, y_pred, sample_weight=None
)
```
Accumulates metric statistics.

For sparse categorical metrics, the shapes of `y_true` and `y_pred` are different.
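For comparison, a sketch contrasting the one-hot `y_true` expected here with the integer labels used by the sparse variant (values are illustrative):

```python
import tensorflow as tf

y_pred = [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]]

# CategoricalAccuracy: y_true is one-hot, same shape as y_pred -> (2, 3)
cat = tf.keras.metrics.CategoricalAccuracy()
cat.update_state([[0, 0, 1], [0, 1, 0]], y_pred)

# SparseCategoricalAccuracy: y_true holds integer labels -> shape (2,)
sparse = tf.keras.metrics.SparseCategoricalAccuracy()
sparse.update_state([2, 1], y_pred)

print(cat.result().numpy(), sparse.result().numpy())  # 0.5 0.5
```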
| Args | |
|---|---|
| `y_true` | Ground truth label values. shape = `[batch_size, d0, .. dN-1]` or shape = `[batch_size, d0, .. dN-1, 1]`. |
| `y_pred` | The predicted probability values. shape = `[batch_size, d0, .. dN]`. |
| `sample_weight` | Optional `sample_weight` acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If `sample_weight` is a tensor of size `[batch_size]`, then the metric for each sample of the batch is rescaled by the corresponding element in the `sample_weight` vector. If the shape of `sample_weight` is `[batch_size, d0, .. dN-1]` (or can be broadcasted to this shape), then each metric element of `y_pred` is scaled by the corresponding value of `sample_weight`. (Note on `dN-1`: all metric functions reduce by 1 dimension, usually the last axis (-1).) |
| Returns | |
|---|---|
| Update op. | |
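A typical pattern is to call `update_state()` once per batch in a custom evaluation loop and read `result()` at the end; the dataset below is an illustrative sketch:

```python
import tensorflow as tf

# Each update_state call folds one batch into the running total and
# count; result() is read once after the loop.
dataset = tf.data.Dataset.from_tensor_slices((
    [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 0]],   # one-hot y_true
    [[0.1, 0.9, 0.8], [0.05, 0.95, 0.0],
     [0.7, 0.2, 0.1], [0.3, 0.4, 0.3]],             # y_pred
)).batch(2)

m = tf.keras.metrics.CategoricalAccuracy()
for y_true, y_pred in dataset:
    m.update_state(y_true, y_pred)
print(m.result().numpy())  # 0.75 -> 3 of 4 predictions match
```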