Basic LSTM recurrent network cell with pruning.
Inherits From: BasicLSTMCell
tf.contrib.model_pruning.MaskedBasicLSTMCell(
num_units, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None,
name=None
)
Overrides the call method of TensorFlow's BasicLSTMCell and injects the weight masks.
The implementation is based on: http://arxiv.org/abs/1409.2329
We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.
It does not allow cell clipping or a projection layer, and it does not use peephole connections: it is the basic baseline.
For advanced models, please use the full tf.compat.v1.nn.rnn_cell.LSTMCell.
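As a minimal usage sketch (assuming TensorFlow 1.x, where `tf.contrib` is available; the shapes, placeholder, and helper call shown are illustrative and not part of this page), the cell can be dropped in wherever a `BasicLSTMCell` would be used:

```python
import tensorflow as tf  # TensorFlow 1.x, where tf.contrib.model_pruning exists

batch_size, time_steps, input_dim, num_units = 32, 10, 16, 128
inputs = tf.placeholder(tf.float32, [batch_size, time_steps, input_dim])

# The masked cell behaves like BasicLSTMCell, but its call() multiplies the
# kernel by a mask variable so the model_pruning tooling can sparsify it.
cell = tf.contrib.model_pruning.MaskedBasicLSTMCell(num_units, forget_bias=1.0)

initial_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, initial_state=initial_state, dtype=tf.float32)

# The mask variables created by the cell are tracked by the pruning library
# (get_masks() is assumed here from tf.contrib.model_pruning).
masks = tf.contrib.model_pruning.get_masks()
```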
Args | |
---|---|
`num_units` | int, the number of units in the LSTM cell.
`forget_bias` | float, the bias added to forget gates (see above). Must be set to 0.0 manually when restoring from CudnnLSTM-trained checkpoints.
`state_is_tuple` | If True, accepted and returned states are 2-tuples of the `c_state` and `m_state`. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.
`activation` | Activation function of the inner states. Default: `tanh`.
`reuse` | (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
`name` | String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require `reuse=True` in such cases. When restoring from CudnnLSTM-trained checkpoints, `CudnnCompatibleLSTMCell` must be used instead.
Attributes | |
---|---|
`graph` | DEPRECATED FUNCTION
`output_size` | Integer or TensorShape: size of outputs produced by this cell.
`scope_name` |
`state_size` | Size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes.
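For illustration, a hedged sketch of what these attributes report for a cell built with `num_units=128` and the default `state_is_tuple=True` (following the `BasicLSTMCell` behavior it inherits):

```python
import tensorflow as tf  # TensorFlow 1.x

cell = tf.contrib.model_pruning.MaskedBasicLSTMCell(num_units=128)
print(cell.output_size)  # 128
print(cell.state_size)   # LSTMStateTuple(c=128, h=128) with state_is_tuple=True
```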
Methods
get_initial_state
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
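No description is given here; in the base RNNCell API this returns an initial (zero-filled) state, taking the batch size and dtype either from `inputs` or from the explicit arguments. A hedged usage sketch (shapes are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x

cell = tf.contrib.model_pruning.MaskedBasicLSTMCell(num_units=128)

# Explicit batch size and dtype...
state = cell.get_initial_state(batch_size=32, dtype=tf.float32)

# ...or inferred from a batch of inputs of shape [batch_size, input_dim].
inputs = tf.zeros([32, 16])
state = cell.get_initial_state(inputs=inputs)
```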
zero_state
zero_state(
batch_size, dtype
)
Return zero-filled state tensor(s).
Args | |
---|---|
`batch_size` | int, float, or unit Tensor representing the batch size.
`dtype` | the data type to use for the state.
Returns | |
---|---|
If `state_size` is an int or TensorShape, then the return value is an N-D tensor of shape `[batch_size, state_size]` filled with zeros. If `state_size` is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes `[batch_size, s]` for each `s` in `state_size`. |
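A short sketch (the batch size and `num_units` values are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x

cell = tf.contrib.model_pruning.MaskedBasicLSTMCell(num_units=128)

# With the default state_is_tuple=True this is an LSTMStateTuple of two
# [32, 128] zero tensors; with state_is_tuple=False it would be a single
# [32, 256] zero tensor.
initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)
```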