Computes best recall where precision is >= specified value.
Inherits From: Metric
tfma.metrics.RecallAtPrecision(
    precision: float,
    num_thresholds: Optional[int] = None,
    class_id: Optional[int] = None,
    name: Optional[str] = None,
    top_k: Optional[int] = None
)
For a given score-label distribution, the required precision might not be achievable; in that case, 0.0 is returned as the recall.
This metric creates three local variables, true_positives, false_positives and false_negatives, that are used to compute the recall at the given precision. The threshold for the given precision value is computed and used to evaluate the corresponding recall.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
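The computation described above can be sketched in pure Python. This is an illustrative simplification, not the TFMA implementation (which uses bucketed thresholds and TF ops); the function name and the candidate-threshold strategy are assumptions for the sketch:

```python
def recall_at_precision(labels, scores, target_precision, sample_weights=None):
    """Best recall over all score thresholds whose precision >= target.

    Returns 0.0 if no threshold achieves the required precision.
    Illustrative sketch only; not the TFMA implementation.
    """
    if sample_weights is None:
        sample_weights = [1.0] * len(labels)  # weights default to 1
    best_recall = 0.0
    # Candidate thresholds: each distinct score. A prediction counts as
    # positive when its score >= threshold.
    for threshold in sorted(set(scores)):
        tp = fp = fn = 0.0
        for y, s, w in zip(labels, scores, sample_weights):
            predicted_positive = s >= threshold
            if predicted_positive and y == 1:
                tp += w
            elif predicted_positive and y == 0:
                fp += w
            elif not predicted_positive and y == 1:
                fn += w
        if tp + fp == 0 or tp + fn == 0:
            continue  # precision or recall undefined at this threshold
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        if precision >= target_precision:
            best_recall = max(best_recall, recall)
    return best_recall
```

For example, with labels `[1, 1, 0, 1, 0]` and scores `[0.9, 0.8, 0.7, 0.4, 0.2]`, only thresholds at or above 0.8 reach a precision of 0.9, and the best recall among them is 2/3; if no threshold reaches the target precision, the result is 0.0.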
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates computations associated with the metric.
from_config
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.