This is an extremely efficient LSTM implementation that uses a single TF op
for the entire LSTM. It should be both faster and more memory-efficient than
the LSTMBlockCell defined above.
We add forget_bias (default: 1.0) to the biases of the forget gate in order
to reduce the scale of forgetting at the beginning of training.
The variable naming is consistent with rnn_cell_impl.LSTMCell.
Args:
  num_units: int, The number of units in the LSTM cell.
  forget_bias: float, The bias added to forget gates (see above).
  cell_clip: Clip the cell state to this value. Default is no cell clipping.
  use_peephole: Whether to use peephole connections or not.
  reuse: (optional) boolean describing whether to reuse variables in an
    existing scope. If not True, and the existing scope already has the
    given variables, an error is raised.
  dtype: the dtype of variables of this layer.
  name: String, the name of the layer. Layers with the same name will
    share weights, but to avoid mistakes we require reuse=True in such
    cases. By default this is "lstm_cell", for variable-name compatibility
    with tf.compat.v1.nn.rnn_cell.LSTMCell.
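To make the role of forget_bias, cell_clip, and use_peephole concrete, here is a
single-unit, pure-Python sketch of one LSTM step. This is only an illustration of
the gate arithmetic, not the fused op itself; the weight-dictionary keys (wi, uf,
pf, etc.) are hypothetical names, not variables created by this layer.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w, forget_bias=1.0, cell_clip=None,
              use_peephole=False):
    """One step of a single-unit LSTM; `w` holds scalar weights/biases.

    Hypothetical keys: wi/ui/bi (input gate), wf/uf/bf (forget gate),
    wg/ug/bg (cell candidate), wo/uo/bo (output gate), and pi/pf/po
    for the optional peephole weights.
    """
    peep = (lambda k, c: w[k] * c) if use_peephole else (lambda k, c: 0.0)
    i = _sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"] + peep("pi", c_prev))
    # forget_bias is added on top of the learned forget-gate bias bf,
    # pushing the gate toward "keep" early in training.
    f = _sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"] + forget_bias
                 + peep("pf", c_prev))
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])
    c = f * c_prev + i * g
    if cell_clip is not None:
        # Clip the cell state to [-cell_clip, cell_clip].
        c = max(-cell_clip, min(cell_clip, c))
    # The output-gate peephole looks at the *new* cell state.
    o = _sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"] + peep("po", c))
    h = o * math.tanh(c)
    return h, c

# With all weights zero and forget_bias=1.0, the forget gate starts near
# sigmoid(1) instead of sigmoid(0) = 0.5, so early training forgets less.
zeros = {k: 0.0 for k in
         ("wi", "ui", "bi", "wf", "uf", "bf",
          "wg", "ug", "bg", "wo", "uo", "bo", "pi", "pf", "po")}
h, c = lstm_step(x=0.0, h_prev=0.0, c_prev=1.0, w=zeros)
```

The fused op performs the same per-unit recurrence vectorized over the whole
sequence and batch in one kernel, which is where its speed advantage comes from.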