Converts the quantized input tensor into a lower-precision output.
tf.raw_ops.Requantize(
    input,
    input_min,
    input_max,
    requested_output_min,
    requested_output_max,
    out_type,
    name=None
)
Converts the quantized input tensor into a lower-precision output, using the output range specified with requested_output_min and requested_output_max.
[input_min, input_max] are scalar floats that specify the range for the float interpretation of the input data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.
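A minimal sketch of that interpretation, assuming the simple linear mapping described above; dequantize_quint16 is a hypothetical helper for illustration, not part of the TensorFlow API:

```python
# Sketch of the linear interpretation for quint16 data, assuming values
# are mapped uniformly from [0, 65535] onto [input_min, input_max].
def dequantize_quint16(q, input_min=-1.0, input_max=1.0):
    # q is an integer in [0, 65535]; map it linearly onto the float range.
    return input_min + (q / 65535.0) * (input_max - input_min)

print(dequantize_quint16(0))      # -1.0
print(dequantize_quint16(65535))  #  1.0
print(dequantize_quint16(32768))  # ~0.0 (midpoint of the range)
```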
Args | |
---|---|
input | A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
input_min | A Tensor of type float32. The float value that the minimum quantized input value represents.
input_max | A Tensor of type float32. The float value that the maximum quantized input value represents.
requested_output_min | A Tensor of type float32. The float value that the minimum quantized output value represents.
requested_output_max | A Tensor of type float32. The float value that the maximum quantized output value represents.
out_type | A tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. The type of the output. Should be a lower bit depth than Tinput.
name | A name for the operation (optional).
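A hedged end-to-end sketch of calling this op. The flow assumed here (quantize a float tensor to qint32 with tf.quantization.quantize, then requantize it down to quint8) and the chosen ranges are illustrative, not the only valid use of the op:

```python
import tensorflow as tf

# Illustrative float data covering the range [-1.0, 1.0].
x = tf.constant([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=tf.float32)

# Quantize to a 32-bit quantized type over the requested float range.
q = tf.quantization.quantize(x, min_range=-1.0, max_range=1.0, T=tf.qint32)

# Requantize the 32-bit values into 8 bits, requesting the same float range
# for the output. The op returns the quantized tensor plus the actual
# output_min / output_max it used.
out, out_min, out_max = tf.raw_ops.Requantize(
    input=q.output,
    input_min=q.output_min,
    input_max=q.output_max,
    requested_output_min=-1.0,
    requested_output_max=1.0,
    out_type=tf.quint8,
)

print(out.dtype)                          # tf.quint8
print(out_min.numpy(), out_max.numpy())   # actual output range used
```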