Calibrates inputs using derived parameters (kernels).
```python
@tf.function
tfl.conditional_pwl_calibration.pwl_calibration_fn(
    inputs: tf.Tensor,
    keypoint_input_parameters: Optional[tf.Tensor],
    keypoint_output_parameters: tf.Tensor,
    keypoint_input_min: float = 0.0,
    keypoint_input_max: float = 1.0,
    keypoint_output_min: float = 0.0,
    keypoint_output_max: float = 1.0,
    units: int = 1,
    monotonicity: str = 'none',
    clamp_min: bool = False,
    clamp_max: bool = False,
    is_cyclic: bool = False,
    missing_input_value: Optional[float] = None,
    missing_output_value: Optional[float] = None,
    return_derived_parameters: bool = False
) -> Union[tf.Tensor, Tuple[tf.Tensor, tf.Tensor, tf.Tensor]]
```
`pwl_calibration_fn` is similar to `tfl.layers.PWLCalibration`, with the key difference that the keypoints are determined by the given parameters instead of by learnable weights belonging to a layer. These parameters can be any of:

- constants,
- trainable variables,
- outputs from other TF modules.
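As a rough sketch of what such a calibration computes (a simplified scalar version, not the library's implementation, which operates on batched tensors), each input is linearly interpolated between the surrounding keypoints:

```python
# Simplified sketch of PWL calibration: given sorted keypoint inputs and
# their outputs, linearly interpolate x between its surrounding keypoints.
# This is illustrative only and not the tensorflow_lattice implementation.
from bisect import bisect_right

def pwl_calibrate(x, keypoint_inputs, keypoint_outputs):
    """Piecewise-linear interpolation of x over (input, output) keypoints."""
    # Clamp to the calibrated range outside the keypoint span.
    if x <= keypoint_inputs[0]:
        return keypoint_outputs[0]
    if x >= keypoint_inputs[-1]:
        return keypoint_outputs[-1]
    i = bisect_right(keypoint_inputs, x)  # index of the right keypoint
    x0, x1 = keypoint_inputs[i - 1], keypoint_inputs[i]
    y0, y1 = keypoint_outputs[i - 1], keypoint_outputs[i]
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# The keypoints here could come from constants, trainable variables, or
# the output of another module, as described above.
print(pwl_calibrate(0.25, [0.0, 0.5, 1.0], [0.0, 1.0, 0.5]))  # 0.5
```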
Shapes:

The last dimension of `keypoint_input_parameters` (`input_param_size`) and `keypoint_output_parameters` (`output_param_size`) depends on the number of keypoints used by the calibrator. We follow the relationships that:

- `input_param_size = # keypoints - 2`, as the leftmost and rightmost keypoints are given.
- `output_param_size = # keypoints` initially, and we then modify it by:
  - if cyclic calibrator: `output_param_size -= 1`,
  - if `clamp_min`: `output_param_size -= 1`,
  - if `clamp_max`: `output_param_size -= 1`,
  - if we need to learn how to impute missing values: `output_param_size += 1`.
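The bookkeeping above can be sketched as a small helper (a hypothetical function for illustration, not part of tensorflow_lattice):

```python
# Compute the expected last-dimension sizes of the parameter tensors,
# following the keypoint-count rules described above. Illustrative only.
def param_sizes(num_keypoints, is_cyclic=False, clamp_min=False,
                clamp_max=False, learn_missing_output=False):
    # Leftmost and rightmost keypoint inputs are given, so only the
    # interior keypoints need input parameters.
    input_param_size = num_keypoints - 2
    output_param_size = num_keypoints
    if is_cyclic:
        output_param_size -= 1  # last output must equal the first
    if clamp_min:
        output_param_size -= 1  # leftmost output is clamped
    if clamp_max:
        output_param_size -= 1  # rightmost output is clamped
    if learn_missing_output:
        output_param_size += 1  # one extra parameter imputes missing inputs
    return input_param_size, output_param_size

print(param_sizes(10))                                   # (8, 10)
print(param_sizes(10, clamp_min=True, clamp_max=True))   # (8, 8)
```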
The final shapes need to be broadcast friendly with `(batch_size, units, 1)`:

- `keypoint_input_parameters`: `(1 or batch_size, 1 or units, input_param_size)`.
- `keypoint_output_parameters`: `(1 or batch_size, 1 or units, output_param_size)`.
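The broadcast requirement can be checked with a small validator (`broadcasts_with` is a hypothetical helper written for this sketch, not a library function):

```python
# Check that a parameter tensor's shape is broadcast friendly with
# (batch_size, units, 1) and carries the expected parameter count.
# Illustrative helper only; not part of tensorflow_lattice.
def broadcasts_with(param_shape, batch_size, units, param_size):
    b, u, p = param_shape
    return (b in (1, batch_size)      # batch dim: 1 or batch_size
            and u in (1, units)       # units dim: 1 or units
            and p == param_size)      # last dim: exact parameter count

print(broadcasts_with((1, 1, 8), batch_size=32, units=4, param_size=8))   # True
print(broadcasts_with((32, 4, 8), batch_size=32, units=4, param_size=8))  # True
print(broadcasts_with((2, 4, 8), batch_size=32, units=4, param_size=8))   # False
```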
Input shape:

`inputs` should be one of:
Returns:

- If `return_derived_parameters = False`: a `tf.Tensor`.
- If `return_derived_parameters = True`: a `Tuple[tf.Tensor, tf.Tensor, tf.Tensor]`.