NDCG (normalized discounted cumulative gain) metric.
Inherits From: Metric
tfma.metrics.NDCG(
    gain_key: str,
    top_k_list: Optional[List[int]] = None,
    name: str = NDCG_NAME
)
Calculates NDCG@k for a given set of top_k values, computed from a list of gains (relevance scores) that are sorted by the associated predictions. The top_k_list can be passed as part of the NDCG metric config, or via tfma.MetricsSpec.binarize.top_k_list when configuring multiple top_k metrics. The gain (relevance score) for each example is read from the feature named by gain_key. The returned NDCG@k value is the weighted average of NDCG@k over the set of queries, using the example weights.
NDCG@k = (DCG@k for the given ranking) / (DCG@k for the optimal ranking)

DCG@k = sum_{i=1}^{k} gain_i / log_2(i + 1)

where gain_i is the gain (relevance score) of the i-th ranked response, with ranks indexed from 1.
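As a plain-Python sketch of these formulas (independent of TFMA, for a single query without example weights; the function names are illustrative, not part of the API):

```python
import math

def dcg_at_k(gains, k):
    # DCG@k = sum_{i=1}^{k} gain_i / log2(i + 1), ranks indexed from 1.
    return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], start=1))

def ndcg_at_k(gains, k):
    # gains: relevance scores in the order induced by the model's predictions.
    # The ideal DCG@k uses the same gains sorted in descending order.
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0
```

A ranking that already lists gains in descending order yields NDCG@k = 1.0; any other ordering yields a value in (0, 1] when at least one gain is positive.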
This is a query/ranking based metric so a query_key must also be provided in the associated tfma.MetricsSpec.
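For example, when the metric is configured through an EvalConfig, the metrics spec might look like the following text-format proto sketch (the feature names query_id and relevance are illustrative placeholders, and the exact config-string formatting may differ):

```
metrics_specs {
  query_key: "query_id"
  metrics {
    class_name: "NDCG"
    config: '{"gain_key": "relevance", "top_k_list": [1, 3, 5]}'
  }
}
```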
Methods
computations
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates the computations associated with the metric.
from_config
@classmethod
from_config(
    config: Dict[str, Any]
) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.