The LocallyConnected1D layer works similarly to
the Conv1D layer, except that weights are unshared,
that is, a different set of filters is applied at each patch
of the input.
Example:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LocallyConnected1D

# apply an unshared-weight 1D convolution of window length 3 to a sequence
# with 10 timesteps, with 64 output filters
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 32)))
# now model.output_shape == (None, 8, 64)
# add a new LocallyConnected1D on top
model.add(LocallyConnected1D(32, 3))
# now model.output_shape == (None, 6, 32)
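A minimal sketch contrasting the first layer above with its Conv1D equivalent, to make the effect of unshared weights concrete; it assumes a TensorFlow 2.x release in which tf.keras.layers.LocallyConnected1D is still available (the layer was removed in Keras 3):

import tensorflow as tf

shared = tf.keras.Sequential(
    [tf.keras.layers.Conv1D(64, 3, input_shape=(10, 32))])
unshared = tf.keras.Sequential(
    [tf.keras.layers.LocallyConnected1D(64, 3, input_shape=(10, 32))])
# Conv1D reuses one (3, 32, 64) kernel at every position:
# 3 * 32 * 64 + 64 = 6,208 parameters.
print(shared.count_params())
# LocallyConnected1D keeps a separate kernel and bias for each of its
# 8 output positions: 8 * (3 * 32 * 64 + 64) = 49,664 parameters.
print(unshared.count_params())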
Args
filters
Integer, the dimensionality of the output space (i.e. the
number of output filters in the convolution).
kernel_size
An integer or tuple/list of a single integer, specifying
the length of the 1D convolution window.
strides
An integer or tuple/list of a single integer, specifying the
stride length of the convolution.
padding
Currently only supports "valid" (case-insensitive). "same"
may be supported in the future. "valid" means no padding.
data_format
A string, one of channels_last (default) or
channels_first. The ordering of the dimensions in the inputs.
channels_last corresponds to inputs with shape (batch, length,
channels) while channels_first corresponds to inputs with shape
(batch, channels, length); see the sketch after this argument list.
When unspecified, uses the image_data_format value found in your Keras
config file at ~/.keras/keras.json (if it exists), else
'channels_last'.
activation
Activation function to use. If you don't specify anything,
no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias
Boolean, whether the layer uses a bias vector.
kernel_initializer
Initializer for the kernel weights matrix.
bias_initializer
Initializer for the bias vector.
kernel_regularizer
Regularizer function applied to the kernel weights
matrix.
bias_regularizer
Regularizer function applied to the bias vector.
activity_regularizer
Regularizer function applied to the output of the
layer (its "activation").
kernel_constraint
Constraint function applied to the kernel matrix.
bias_constraint
Constraint function applied to the bias vector.
implementation
Implementation mode, either 1, 2, or 3. Mode 1 loops
over input spatial locations to perform the forward pass; it is
memory-efficient but performs a lot of (small) ops. Mode 2 stores the
layer weights in a dense but sparsely-populated 2D matrix and implements
the forward pass as a single matrix multiply; it uses a lot of RAM but
performs few (large) ops. Mode 3 stores the layer weights in a sparse
tensor and implements the forward pass as a single sparse matrix
multiply.
How to choose:
1: large, dense models;
2: small models;
3: large, sparse models;
where "large" stands for large input/output activations (i.e. many
filters, input_filters, large input_size, output_size) and "sparse"
stands for few connections between inputs and outputs, i.e. a small
ratio filters * input_filters * kernel_size / (input_size * strides),
where the inputs to and outputs of the layer are assumed to have
shapes (input_size, input_filters) and (output_size, filters),
respectively. It is recommended to benchmark each mode in the setting
of interest to pick the most efficient one in terms of speed and
memory usage (see the benchmarking sketch after this argument list).
The correct choice of implementation can lead to dramatic speed
improvements (e.g. 50x), potentially at the expense of RAM. Also, only
padding="valid" is supported by implementation=1.
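As referenced under data_format, a minimal sketch of a channels_first input; the shapes are arbitrary example values, and tf.keras is assumed:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LocallyConnected1D

model = Sequential()
# input is (batch, channels, length) = (None, 32, 10)
model.add(LocallyConnected1D(64, 3, data_format='channels_first',
                             input_shape=(32, 10)))
# now model.output_shape == (None, 64, 8), i.e. (batch, filters, new_steps)
print(model.output_shape)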
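And a rough benchmarking sketch for choosing an implementation mode, as recommended above; the input shape, filter count, and repeat count are arbitrary assumptions, and timings vary by hardware:

import time
import numpy as np
from tensorflow.keras.layers import LocallyConnected1D

x = np.random.rand(32, 128, 16).astype('float32')
for impl in (1, 2, 3):
    layer = LocallyConnected1D(32, 3, implementation=impl)
    layer(x)  # build the weights and warm up
    start = time.perf_counter()
    for _ in range(100):
        layer(x)
    print(f'implementation={impl}: {time.perf_counter() - start:.3f}s')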
Input shape
3D tensor with shape: (batch_size, steps, input_dim)
Output shape
3D tensor with shape: (batch_size, new_steps, filters). The steps value
might have changed due to padding or strides.
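For padding="valid", new_steps follows the standard no-padding formula; a short sketch (an assumption consistent with the shapes in the example above):

def new_steps(steps, kernel_size, strides=1):
    # each output position consumes a full kernel-sized window
    return (steps - kernel_size) // strides + 1

print(new_steps(10, 3))  # 8, matching the first layer in the example
print(new_steps(8, 3))   # 6, matching the second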
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-01-23 UTC."],[],[]]