Computes the precision metric for the given ground truth and predictions.
tfg.nn.metric.precision.evaluate(
    ground_truth: type_alias.TensorLike,
    prediction: type_alias.TensorLike,
    classes: Optional[Union[int, List[int], Tuple[int]]] = None,
    reduce_average: bool = True,
    prediction_to_category_function: Callable[..., Any] = _cast_to_int,
    name: str = 'precision_evaluate'
) -> tf.Tensor
Note: In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
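A minimal usage sketch based on the signature above; the label and prediction
values are purely illustrative.

import tensorflow as tf
import tensorflow_graphics as tfg

# Binary ground truth labels and hard 0/1 predictions for a batch of two
# examples with four samples each.
ground_truth = tf.constant([[1, 0, 1, 1],
                            [0, 1, 0, 0]])
prediction = tf.constant([[1, 1, 1, 0],
                          [0, 1, 0, 1]])

# With the defaults, the classes are inferred from the data, precision is
# computed for each class, and the per-class values are averaged
# (reduce_average=True).
avg_precision = tfg.nn.metric.precision.evaluate(ground_truth, prediction)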
Args:
  ground_truth: A tensor of shape [A1, ..., An, N], where the last axis
    represents the ground truth labels. Will be cast to int32.
  prediction: A tensor of shape [A1, ..., An, N], where the last axis
    represents the predictions (which can be continuous).
  classes: An integer or a list/tuple of integers representing the classes
    for which the precision will be evaluated. If classes is None, the number
    of classes will be inferred from the given labels and the precision will
    be calculated for each class. Defaults to None. See the example after
    this list.
  reduce_average: Whether to average the per-class precision values and
    return a single precision value. Defaults to True.
  prediction_to_category_function: A function that maps a prediction to a
    category. Defaults to rounding the prediction down to the nearest
    integer. See the example after this list.
  name: A name for this op. Defaults to 'precision_evaluate'.
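A sketch of the class-selection and custom-mapping options described above.
The explicit class list, the round-then-cast lambda, and all data values are
assumptions chosen for illustration; they are not part of the documented API.

import tensorflow as tf
import tensorflow_graphics as tfg

ground_truth = tf.constant([0, 1, 2, 1, 0])

# Per-class precision for an explicit set of classes, without averaging.
hard_prediction = tf.constant([0, 2, 2, 1, 1])
per_class = tfg.nn.metric.precision.evaluate(
    ground_truth,
    hard_prediction,
    classes=[0, 1, 2],
    reduce_average=False)

# Continuous predictions can be mapped to categories with a custom function;
# here a round-then-cast lambda replaces the default cast to int, so 1.7 is
# treated as class 2 rather than class 1.
continuous_prediction = tf.constant([0.2, 1.7, 2.1, 0.9, 1.4])
per_class_rounded = tfg.nn.metric.precision.evaluate(
    ground_truth,
    continuous_prediction,
    classes=[0, 1, 2],
    reduce_average=False,
    prediction_to_category_function=lambda p: tf.cast(tf.round(p), tf.int32))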
Returns:
  A tensor of shape [A1, ..., An, C], where the last axis represents the
  precision calculated for each of the requested classes.
Raises:
  ValueError: If the shape of ground_truth or prediction is not supported.