Layer which includes a random ensemble of lattices.
tfl.layers.RTL(
num_lattices,
lattice_rank,
lattice_size=2,
output_min=None,
output_max=None,
init_min=None,
init_max=None,
separate_outputs=False,
random_seed=42,
num_projection_iterations=10,
monotonic_at_every_step=True,
clip_inputs=True,
interpolation='hypercube',
parameterization='all_vertices',
num_terms=2,
avoid_intragroup_interaction=True,
kernel_initializer='random_monotonic_initializer',
kernel_regularizer=None,
average_outputs=False,
**kwargs
)
RTL (Random Tiny Lattices) is an ensemble of tfl.layers.Lattice
layers that
takes in a collection of monotonic and unconstrained features and randomly
arranges them into lattices of a given rank. The input is taken as "groups",
and inputs from the same group will not be used in the same lattice. E.g. the
input can be the output of a calibration layer with multiple units applied to
the same input feature. If there are more slots in the RTL than the number of
inputs, inputs will be repeatedly used. Repeats will be approximately uniform
across all inputs.
Input shape:
A dict with keys 'unconstrained' and/or 'increasing', mapping to the
unconstrained and monotonic inputs respectively. Each value can be a single
rank-2 tensor of shape (batch_size, units) or a list of such tensors (see
the example below).
Output shape:
If separate_outputs == True, the output will be in the same format as the
input and can be passed to follow-on RTL layers:
{'unconstrained': unconstrained_out, 'increasing': mon_out}, where
unconstrained_out and mon_out have shapes (batch_size, num_unconstrained_out)
and (batch_size, num_mon_out) respectively, and
num_unconstrained_out + num_mon_out == num_lattices. If
separate_outputs == False, the output will be a rank-2 tensor of shape
(batch_size, num_lattices) if average_outputs is False, or (batch_size, 1) if
it is True.
Example:

a = keras.Input(shape=(1,))
b = keras.Input(shape=(1,))
c = keras.Input(shape=(1,))
d = keras.Input(shape=(1,))
cal_a = tfl.layers.CategoricalCalibration(
    units=10, output_min=0, output_max=1, ...)(a)
cal_b = tfl.layers.PWLCalibration(
    units=20, output_min=0, output_max=1, ...)(b)
cal_c = tfl.layers.PWLCalibration(
    units=10, output_min=0, output_max=1, monotonicity='increasing', ...)(c)
cal_d = tfl.layers.PWLCalibration(
    units=20, output_min=0, output_max=1, monotonicity='decreasing', ...)(d)
rtl_0 = tfl.layers.RTL(
    num_lattices=20,
    lattice_rank=3,
    output_min=0,
    output_max=1,
    separate_outputs=True,
)({
    'unconstrained': [cal_a, cal_b],
    'increasing': [cal_c, cal_d],
})
rtl_1 = tfl.layers.RTL(num_lattices=5, lattice_rank=4)(rtl_0)
outputs = tfl.layers.Linear(
    num_input_dims=5,
    monotonicities=['increasing'] * 5,
)(rtl_1)
model = keras.Model(inputs=[a, b, c, d], outputs=outputs)
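The example above chains two RTL layers with separate_outputs=True. For the
single-tensor output path described under "Output shape", the following is a
minimal sketch (not from the original docs; the calibrated input tensor is
hypothetical and assumed to already lie in [0, 1]):

import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical: 4 already-calibrated, unconstrained features per example.
calibrated = tf.keras.Input(shape=(4,))

# separate_outputs=False (the default) yields one tensor of shape
# (batch_size, num_lattices).
rtl_out = tfl.layers.RTL(
    num_lattices=6,
    lattice_rank=2,
    output_min=0.0,
    output_max=1.0,
)({'unconstrained': calibrated})
print(rtl_out.shape)  # (None, 6)

# average_outputs=True averages the lattice outputs to shape (batch_size, 1).
averaged = tfl.layers.RTL(
    num_lattices=6,
    lattice_rank=2,
    average_outputs=True,
)({'unconstrained': calibrated})
print(averaged.shape)  # (None, 1)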
Args

num_lattices: Number of lattices in the ensemble.
lattice_rank: Number of features used in each lattice.
lattice_size: Number of lattice vertices per dimension (minimum is 2).
output_min: None or lower bound of the output.
output_max: None or upper bound of the output.
init_min: None or lower bound of lattice kernel initialization.
init_max: None or upper bound of lattice kernel initialization.
separate_outputs: If set to true, the output will be a dict in the same
  format as the input to the layer, ready to be passed to another RTL layer.
  If false, the output will be a single tensor of shape
  (batch_size, num_lattices). See output shape for details.
random_seed: Random seed for the randomized feature arrangement in the
  ensemble. Also used for initialization of lattices using the
  'kronecker_factored' parameterization.
num_projection_iterations: Number of iterations of the Dykstra projection
  algorithm. Projection updates will be closer to a true projection (with
  respect to the L2 norm) with a higher number of iterations, but with
  diminishing returns on projection precision; an infinite number of
  iterations would yield a perfect projection. Increasing this number might
  slightly improve convergence at the cost of slightly longer running time.
  Most likely you want this number to be proportional to the number of
  lattice vertices in the largest constrained dimension.
monotonic_at_every_step: Whether to strictly enforce monotonicity and trust
  constraints after every gradient update by applying a final imprecise
  projection. Setting this parameter to True together with a small
  num_projection_iterations parameter is likely to hurt convergence.
clip_inputs: If inputs should be clipped to the input range of the lattice.
interpolation: One of 'hypercube' or 'simplex' interpolation. For a
  d-dimensional lattice, 'hypercube' interpolates 2^d parameters, whereas
  'simplex' uses d+1 parameters and thus scales better. For details see
  tfl.lattice_lib.evaluate_with_simplex_interpolation and
  tfl.lattice_lib.evaluate_with_hypercube_interpolation.
parameterization: The parameterization of the lattice function class to use.
  A lattice function is uniquely determined by specifying its value on every
  lattice vertex. A parameterization scheme is a mapping from a vector of
  parameters to a multidimensional array of lattice vertex values. It can be
  one of 'all_vertices' (one parameter per lattice vertex) or
  'kronecker_factored' (a compact factored parameterization with num_terms
  terms; see tfl.layers.KroneckerFactoredLattice). A usage sketch is shown
  after this argument list.
num_terms: The number of terms in a lattice using 'kronecker_factored'
  parameterization. Ignored if parameterization is set to 'all_vertices'.
avoid_intragroup_interaction: If set to true, the RTL algorithm will try to
  avoid having inputs from the same group in the same lattice.
kernel_initializer: One of:
  - 'linear_initializer': initialize parameters to form a linear function
    with positive and equal coefficients for monotonic dimensions and 0.0
    coefficients for other dimensions. The linear function is such that the
    minimum possible output is equal to output_min and the maximum possible
    output is equal to output_max. See the
    tfl.lattice_layer.LinearInitializer class docstring for more details.
    This initialization is not supported when using the
    'kronecker_factored' parameterization.
  - 'random_monotonic_initializer': initialize parameters uniformly at
    random such that all parameters are monotonically increasing for each
    input. Parameters will be sampled uniformly at random from the range
    [init_min, init_max] if specified, otherwise [output_min, output_max].
    See the tfl.lattice_layer.RandomMonotonicInitializer class docstring for
    more details. This initialization is not supported when using the
    'kronecker_factored' parameterization.
  - 'kfl_random_monotonic_initializer': initialize parameters uniformly at
    random such that all parameters are monotonically increasing for each
    monotonic input. Parameters will be sampled uniformly at random from the
    range [init_min, init_max] if specified. Otherwise, the initialization
    range will be algorithmically determined depending on output_{min/max}.
    See the tfl.layers.KroneckerFactoredLattice and
    tfl.kronecker_factored_lattice.KFLRandomMonotonicInitializer class
    docstrings for more details. This initialization is not supported when
    using the 'all_vertices' parameterization.
kernel_regularizer: None, or a single element or a list of the following:
  - ('torsion', l1, l2) or ['torsion', l1, l2], where l1 and l2 represent
    the corresponding regularization amounts for the graph Torsion
    regularizer. l1 and l2 must be single floats. Lists of floats to specify
    different regularization amounts for every dimension are not currently
    supported.
  - ('laplacian', l1, l2) or ['laplacian', l1, l2], where l1 and l2
    represent the corresponding regularization amounts for the graph
    Laplacian regularizer. l1 and l2 must be single floats. Lists of floats
    to specify different regularization amounts for every dimension are not
    currently supported.
average_outputs: Whether to average the outputs of this layer. Ignored when
  separate_outputs is True.
**kwargs: Other args passed to keras.layers.Layer initializer.
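As a usage sketch for the constructor arguments above (not from the original
docs; the calibrated input tensor and all hyperparameter values are
hypothetical), the following builds one RTL with the compact
'kronecker_factored' parameterization, and another with the default
'all_vertices' parameterization plus simplex interpolation and a graph
torsion regularizer:

import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical: 8 calibrated, monotonically increasing features in [0, 1].
calibrated = tf.keras.Input(shape=(8,))

# Kronecker-factored lattices with a compatible initializer.
kfl_rtl = tfl.layers.RTL(
    num_lattices=10,
    lattice_rank=3,
    output_min=0.0,
    output_max=1.0,
    parameterization='kronecker_factored',  # compact parameterization
    num_terms=2,                            # only used with 'kronecker_factored'
    kernel_initializer='kfl_random_monotonic_initializer',
)({'increasing': calibrated})

# Default parameterization with simplex interpolation and a torsion regularizer.
torsion_rtl = tfl.layers.RTL(
    num_lattices=10,
    lattice_rank=3,
    interpolation='simplex',                    # d+1 parameters per interpolation
    kernel_regularizer=('torsion', 0.0, 1e-4),  # (name, l1, l2) amounts
)({'increasing': calibrated})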
Raises

ValueError: If layer hyperparameters are invalid.
ValueError: If parameterization is not one of 'all_vertices' or
  'kronecker_factored'.
Attributes

activity_regularizer: Optional regularizer function for the output of this
  layer.
compute_dtype: The dtype of the layer's computations. This is equivalent to
  Layer.dtype_policy.compute_dtype. Layers automatically cast their inputs
  to the compute dtype, which causes computations and the output to be in
  the compute dtype as well. This is done by the base Layer class in
  Layer.__call__, so you do not have to insert these casts when implementing
  your own layer. Layers often perform certain internal computations in
  higher precision when compute_dtype is float16 or bfloat16 for numeric
  stability.
dtype: The dtype of the layer weights. This is equivalent to
  Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is
  the same as Layer.compute_dtype, the dtype of the layer's computations.
dtype_policy: The dtype policy associated with this layer. This is an
  instance of a tf.keras.mixed_precision.Policy.
dynamic: Whether the layer is dynamic (eager-only); set in the constructor.
input: Retrieves the input tensor(s) of a layer. Only applicable if the
  layer has exactly one input, i.e. if it is connected to one incoming
  layer.
input_spec: InputSpec instance(s) describing the input format for this
  layer. When you create a layer subclass, you can set self.input_spec to
  enable the layer to run input compatibility checks when it is called.
  Input checks that can be specified via input_spec include structure,
  shape, rank, and dtype. For more information, see
  tf.keras.layers.InputSpec.
losses: List of losses added using the add_loss() API. Variable
  regularization tensors are created when this property is accessed, so it
  is eager safe: accessing losses under a tf.GradientTape will propagate
  gradients back to the corresponding variables.
metrics: List of metrics attached to the layer.
name: Name of the layer (string), set in the constructor.
name_scope: Returns a tf.name_scope instance for this class.
non_trainable_weights: List of all non-trainable weights tracked by this
  layer. Non-trainable weights are not updated during training. They are
  expected to be updated manually in call().
output: Retrieves the output tensor(s) of a layer. Only applicable if the
  layer has exactly one output, i.e. if it is connected to one incoming
  layer.
submodules: Sequence of all sub-modules. Submodules are modules which are
  properties of this module, or found as properties of modules which are
  properties of this module (and so on).
supports_masking: Whether this layer supports computing a mask using
  compute_mask.
trainable: Whether the layer's weights should be updated during training.
trainable_weights: List of all trainable weights tracked by this layer.
  Trainable weights are updated via gradient descent during training.
variable_dtype: Alias of Layer.dtype, the dtype of the weights.
weights: Returns the list of all layer variables/weights (see the sketch
  after this list).
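As a quick orientation to these attributes, here is a minimal sketch (not
from the original docs; the input tensor is hypothetical) that builds the
layer and inspects a few of them:

import tensorflow as tf
import tensorflow_lattice as tfl

layer = tfl.layers.RTL(num_lattices=4, lattice_rank=2)
layer({'unconstrained': tf.zeros([1, 3])})  # calling the layer builds its weights

print(layer.name)                    # layer name set in the constructor
print(layer.dtype)                   # dtype of the layer weights, e.g. float32
print(len(layer.trainable_weights))  # variables updated by gradient descent
print(layer.count_params())          # total number of scalar weights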
Methods
add_loss
add_loss(
losses, **kwargs
)
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent
on the inputs passed when calling a layer. Hence, when reusing the same
layer on different inputs a and b, some entries in layer.losses may be
dependent on a and some on b. This method automatically keeps track of
dependencies.
This method can be used inside a subclassed layer or model's call function,
in which case losses should be a Tensor or list of Tensors.
Example:

class MyLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    self.add_loss(tf.abs(tf.reduce_mean(inputs)))
    return inputs

The same code works in distributed training: the input to add_loss() is
treated like a regularization loss and averaged across replicas by the
training loop (both built-in Model.fit() and compliant custom training
loops).
The add_loss method can also be called directly on a Functional Model during
construction. In this case, any loss Tensors passed to this Model must be
symbolic and be able to be traced back to the model's Inputs. These losses
become part of the model's topology and are tracked in get_config.
Example:
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
If this is not the case for your loss (if, for example, your loss
references a Variable
of one of the model's layers), you can wrap your
loss in a zero-argument lambda. These losses are not tracked as part of
the model's topology since they can't be serialized.
Example:
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
Args

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses
  may also be zero-argument callables which create a loss tensor.
**kwargs: Used for backwards compatibility only.
assert_constraints
assert_constraints(
eps=1e-06
)
Asserts that weights satisfy all constraints.
In graph mode builds and returns a list of assertion ops. In eager mode directly executes assertions.
Args

eps: Allowed constraints violation.

Returns

List of assertion ops in graph mode, or immediately asserts in eager mode.
build
build(
input_shape
)
Standard Keras build() method.
build_from_config
build_from_config(
config
)
Builds the layer's states with the supplied config dict.
By default, this method calls the build(config["input_shape"])
method,
which creates weights based on the layer's input shape in the supplied
config. If your config contains other information needed to load the
layer's state, you should override this method.
Args

config: Dict containing the input shape associated with this layer.
compute_mask
compute_mask(
inputs, mask=None
)
Computes an output mask tensor.
Args

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns

None or a tensor (or list of tensors, one per output tensor of the layer).
compute_output_shape
compute_output_shape(
input_shape
)
Standard Keras compute_output_shape() method.
count_params
count_params()
Count the total number of scalars composing the weights.
Returns

An integer count.

Raises

ValueError: If the layer isn't yet built (in which case its weights aren't
  yet defined).
finalize_constraints
finalize_constraints()
Ensures the layer's weights strictly satisfy the constraints.
Applies an approximate projection to strictly satisfy the specified
constraints. If monotonic_at_every_step == True, there is no need to call
this function.

Returns

In eager mode, directly updates the weights and returns the variable that
stores them. In graph mode, returns a list of assign_add ops which have to
be executed to update the weights.
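A minimal sketch of how finalize_constraints and assert_constraints might be
used together (not from the original docs; the model, data, and
hyperparameter values are hypothetical):

import tensorflow as tf
import tensorflow_lattice as tfl

inputs = tf.keras.Input(shape=(4,))
rtl_layer = tfl.layers.RTL(
    num_lattices=4,
    lattice_rank=2,
    output_min=0.0,
    output_max=1.0,
    monotonic_at_every_step=False,  # skip the per-step final projection
)
outputs = rtl_layer({'increasing': inputs})
model = tf.keras.Model(inputs, outputs)

# ... compile and train the model ...

# Apply a final approximate projection, then verify the constraints hold.
rtl_layer.finalize_constraints()
rtl_layer.assert_constraints(eps=1e-6)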
from_config
@classmethod
from_config( config )
Creates a layer from its config.
This method is the reverse of get_config
,
capable of instantiating the same layer from the config
dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by set_weights
).
Args

config: A Python dictionary, typically the output of get_config.

Returns

A layer instance.
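A minimal sketch of a config round trip (not from the original docs; the
hyperparameter values are hypothetical):

import tensorflow_lattice as tfl

layer = tfl.layers.RTL(num_lattices=4, lattice_rank=2)
config = layer.get_config()
# A fresh layer with the same hyperparameters (weights are not copied).
restored = tfl.layers.RTL.from_config(config)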
get_build_config
get_build_config()
Returns a dictionary with the layer's input shape.
This method returns a config dict that can be used by
build_from_config(config)
to create all states (e.g. Variables and
Lookup tables) needed by the layer.
By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when TF-Keras attempts to load its value upon model loading.
Returns

A dict containing the input shape associated with the layer.
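A minimal sketch of rebuilding a layer's state from a built layer's config
(not from the original docs; shapes and hyperparameter values are
hypothetical):

import tensorflow as tf
import tensorflow_lattice as tfl

layer = tfl.layers.RTL(num_lattices=4, lattice_rank=2)
layer({'unconstrained': tf.zeros([1, 3])})  # build by calling on a sample input

build_config = layer.get_build_config()     # by default, {'input_shape': ...}
fresh = tfl.layers.RTL(num_lattices=4, lattice_rank=2)
fresh.build_from_config(build_config)       # creates the weights without calling the layer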
get_config
get_config()
Standard Keras get_config() method.
get_weights
get_weights()
Returns the current weights of the layer, as NumPy arrays.
The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.
For example, a Dense
layer returns a list of two values: the kernel
matrix and the bias vector. These can be used to set the weights of
another Dense
layer:
layer_a = tf.keras.layers.Dense(1,
    kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
    kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]

Returns

Weights values as a list of NumPy arrays.
load_own_variables
load_own_variables(
store
)
Loads the state of the layer.
You can override this method to take full control of how the state of
the layer is loaded upon calling keras.models.load_model()
.
Args

store: Dict from which the state of the model will be loaded.
save_own_variables
save_own_variables(
store
)
Saves the state of the layer.
You can override this method to take full control of how the state of
the layer is saved upon calling model.save()
.
Args

store: Dict where the state of the model will be saved.
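A minimal sketch of overriding both save_own_variables and
load_own_variables (not from the original docs; keying variables by name is
a hypothetical choice, and the default implementation uses index-based
keys):

import tensorflow_lattice as tfl

class NamedStateRTL(tfl.layers.RTL):

  def save_own_variables(self, store):
    # Store each variable's value under its name.
    for var in self.weights:
      store[var.name] = var.numpy()

  def load_own_variables(self, store):
    # Restore each variable from the entry saved under its name.
    for var in self.weights:
      var.assign(store[var.name])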
set_weights
set_weights(
weights
)
Sets the weights of the layer, from NumPy arrays.
The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.
For example, a Dense
layer returns a list of two values: the kernel
matrix and the bias vector. These can be used to set the weights of
another Dense
layer:
layer_a = tf.keras.layers.Dense(1,
    kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
    kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args

weights: A list of NumPy arrays. The number of arrays and their shapes must
  match the weights of the layer (i.e. it should match the output of
  get_weights).

Raises

ValueError: If the provided weights list does not match the layer's
  specifications.
with_name_scope
@classmethod
with_name_scope( method )
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names
include the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args

method: The method to wrap.

Returns

The original method wrapped such that it enters the module's name scope.
__call__
__call__(
*args, **kwargs
)
Wraps call
, applying pre- and post-processing steps.
Args

*args: Positional arguments to be passed to self.call.
**kwargs: Keyword arguments to be passed to self.call.

Returns

Output tensor(s).
Raises

ValueError: If the layer's call method returns None (an invalid value).
RuntimeError: If super().__init__() was not called in the constructor.