Runs dynamic decoding with a decoder.
tfa.seq2seq.dynamic_decode(
    decoder: Union[tfa.seq2seq.Decoder, tfa.seq2seq.BaseDecoder],
    output_time_major: bool = False,
    impute_finished: bool = False,
    maximum_iterations: Optional[TensorLike] = None,
    parallel_iterations: int = 32,
    swap_memory: bool = False,
    training: Optional[bool] = None,
    scope: Optional[str] = None,
    enable_tflite_convertible: bool = False,
    **kwargs
) -> Tuple[Any, Any, Any]
Calls initialize() once and step() repeatedly on the decoder object.
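For illustration, the following is a minimal usage sketch, not an official example: it drives a tfa.seq2seq.BasicDecoder (a BaseDecoder subclass) with a TrainingSampler and passes the decoder's tensor inputs through **kwargs as decoder_init_input and decoder_init_kwargs, the keyword names that BaseDecoder forwards to dynamic_decode in the TensorFlow Addons source; treat those names and all sizes below as assumptions.

import tensorflow as tf
import tensorflow_addons as tfa

# Placeholder sizes for the sketch.
batch_size, max_time, units, vocab_size = 4, 7, 16, 10

cell = tf.keras.layers.LSTMCell(units)
sampler = tfa.seq2seq.TrainingSampler()          # teacher forcing at each step
decoder = tfa.seq2seq.BasicDecoder(
    cell, sampler, output_layer=tf.keras.layers.Dense(vocab_size))

# Embedded decoder inputs, their lengths, and the cell's zero state.
inputs = tf.random.normal([batch_size, max_time, units])
sequence_length = tf.fill([batch_size], max_time)
initial_state = cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)

# decoder_init_input / decoder_init_kwargs are routed to decoder.initialize()
# (assumed keyword names; BasicDecoder is a BaseDecoder, so it receives its
# tensor inputs this way rather than through an argument-free initialize()).
final_outputs, final_state, final_sequence_lengths = tfa.seq2seq.dynamic_decode(
    decoder,
    maximum_iterations=max_time,
    decoder_init_input=inputs,
    decoder_init_kwargs={"initial_state": initial_state,
                         "sequence_length": sequence_length},
)
print(final_outputs.rnn_output.shape)   # (4, 7, 10): batch major by default

Here final_outputs is a BasicDecoderOutput holding rnn_output and sample_id; with output_time_major=True the batch and time dimensions of the outputs would be swapped.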
Args
  decoder: A tfa.seq2seq.Decoder or tfa.seq2seq.BaseDecoder instance.
  output_time_major: Python boolean. Default: False (batch major). If True,
    outputs are returned as time-major tensors (this mode is faster).
    Otherwise, outputs are returned as batch-major tensors (this adds extra
    time to the computation).
  impute_finished: Python boolean. If True, states for batch entries that are
    marked as finished are copied through and the corresponding outputs are
    zeroed out. This causes some slowdown at each time step, but ensures that
    the final state and outputs have the correct values and that backprop
    ignores time steps that were marked as finished.
  maximum_iterations: A strictly positive int32 scalar, the maximum allowed
    number of decoding steps. Default is None (decode until the decoder is
    fully done).
  parallel_iterations: Argument passed to tf.while_loop.
  swap_memory: Argument passed to tf.while_loop.
  training: Python boolean. Indicates whether the layer should behave in
    training mode or in inference mode. Only relevant when dropout or
    recurrent_dropout is used.
  scope: Optional name scope to use.
  enable_tflite_convertible: Python boolean. If True, the TensorArray
    variables are given a 1-D static shape, and zero padding in the output
    tensor is discarded. Default: False.
  **kwargs: Dict of other keyword arguments for dynamic_decode. It may
    contain the arguments needed to initialize a BaseDecoder, which receives
    all of its tensor inputs during call().
Returns
  (final_outputs, final_state, final_sequence_lengths).
Raises
  ValueError: If maximum_iterations is provided but is not a scalar.
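Conceptually, the function just runs the loop described above: one initialize() call, then step() until every batch entry is finished or maximum_iterations is reached. Below is a rough, non-authoritative Python sketch of that control flow; the real implementation runs inside tf.while_loop with TensorArrays and additionally handles impute_finished, time-major output, and TFLite-convertible mode.

import tensorflow as tf

def sketch_dynamic_decode(decoder, maximum_iterations=None):
    # Sketch only: assumes a tfa.seq2seq.Decoder whose initialize() takes no
    # tensor arguments. step() returns (outputs, next_state, next_inputs,
    # finished) per the Decoder interface.
    finished, inputs, state = decoder.initialize()
    outputs, time = [], 0
    while not bool(tf.reduce_all(finished)):
        if maximum_iterations is not None and time >= maximum_iterations:
            break
        step_outputs, state, inputs, finished = decoder.step(
            tf.constant(time), inputs, state, training=None)
        outputs.append(step_outputs)
        time += 1
    return outputs, state, time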