Shared model used during extraction and evaluation.
tfma.types.EvalSharedModel(
    model_path: Optional[str] = None,
    add_metrics_callbacks: Optional[List[AddMetricsCallbackType]] = None,
    include_default_metrics: Optional[bool] = True,
    example_weight_key: Optional[Union[str, Dict[str, str]]] = None,
    additional_fetches: Optional[List[str]] = None,
    model_loader: Optional[tfma.types.ModelLoader] = None,
    model_name: str = '',
    model_type: str = '',
    rubber_stamp: bool = False,
    is_baseline: bool = False,
    resource_hints: Optional[Dict[str, Any]] = None,
    backend_config: Optional[Any] = None,
    construct_fn: Optional[Callable[[], Any]] = None
)
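In practice an EvalSharedModel is usually created with the tfma.default_eval_shared_model helper rather than constructed directly. The sketch below shows this, assuming an EvalSavedModel has already been exported to the (hypothetical) path used here.

import tensorflow_model_analysis as tfma

# Minimal sketch: build the shared model from an exported EvalSavedModel
# and hand it to run_model_analysis. Paths below are placeholders.
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/tmp/exported_eval_saved_model',
    include_default_metrics=True)

eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    data_location='/tmp/eval_data.tfrecord')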
More details on add_metrics_callbacks:
Each add_metrics_callback should have the following prototype: def add_metrics_callback(features_dict, predictions_dict, labels_dict):
Note that features_dict, predictions_dict and labels_dict are not necessarily dictionaries - they might also be Tensors, depending on what the model's eval_input_receiver_fn returns.
It should create and return a metric_ops dictionary, such that metric_ops['metric_name'] = (value_op, update_op), just as in the Trainer.
Short example:
def add_metrics_callback(features_dict, predictions_dict, labels_dict):
  metric_ops = {}
  metric_ops['mean_label'] = tf.metrics.mean(labels_dict)
  metric_ops['mean_probability'] = tf.metrics.mean(tf.slice(
      predictions_dict['probabilities'], [0, 1], [2, 1]))
  return metric_ops
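A callback like the one above is not called directly; it is attached through add_metrics_callbacks when the shared model is built. A short sketch, using the same hypothetical export path as above:

eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/tmp/exported_eval_saved_model',
    add_metrics_callbacks=[add_metrics_callback])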