LeCun normal initializer.
Inherits From: VarianceScaling
tf.keras.initializers.LecunNormal(
    seed=None
)
Also available via the shortcut function tf.keras.initializers.lecun_normal.
Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized.
Draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(1 / fan_in), where fan_in is the number of input units in the weight tensor.
Examples:
# Standalone usage:
initializer = tf.keras.initializers.LecunNormal()
values = initializer(shape=(2, 2))
# Usage in a Keras layer:
initializer = tf.keras.initializers.LecunNormal()
layer = tf.keras.layers.Dense(3, kernel_initializer=initializer)
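As a supplementary sketch (assuming TF 2.x eager execution), the initializer can also be passed to a layer by its string name, and for a large fan_in the sampled values should have a standard deviation close to sqrt(1 / fan_in); exact sample statistics vary between runs:
# Usage via the string identifier (sketch):
import tensorflow as tf
layer = tf.keras.layers.Dense(3, kernel_initializer='lecun_normal')
# Rough empirical check of the stddev (sample statistics vary per run):
import numpy as np
initializer = tf.keras.initializers.LecunNormal()
values = initializer(shape=(1000, 1000))  # fan_in = 1000
print(np.std(values.numpy()))  # roughly sqrt(1 / 1000) ~= 0.0316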
| Arguments | |
|---|---|
| seed | A Python integer. Used to seed the random generator. |
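As an illustrative sketch (not part of the original argument table), constructing two initializers with the same seed should yield identical initial values, which helps with reproducibility:
# Sketch: the same seed should produce the same initial values.
import tensorflow as tf
a = tf.keras.initializers.LecunNormal(seed=42)
b = tf.keras.initializers.LecunNormal(seed=42)
print(tf.reduce_all(tf.equal(a(shape=(2, 2)), b(shape=(2, 2)))))  # ==> True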
References:
- Self-Normalizing Neural Networks, Klambauer et al., 2017
- Efficient BackProp, LeCun et al., 1998
Methods
from_config
@classmethod
from_config(
    config
)
Instantiates an initializer from a configuration dictionary.
Example:
initializer = RandomUniform(-1, 1)
config = initializer.get_config()
initializer = RandomUniform.from_config(config)
| Args | |
|---|---|
| config | A Python dictionary. It will typically be the output of get_config. |
| Returns | |
|---|---|
| An Initializer instance. |
get_config
get_config()
Returns the configuration of the initializer as a JSON-serializable dict.
| Returns | |
|---|---|
| A JSON-serializable Python dict. |
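For illustration, a minimal sketch of round-tripping a LecunNormal initializer through get_config and from_config (assuming TF 2.x):
# Sketch: serialize the initializer to a plain dict and rebuild it.
import tensorflow as tf
initializer = tf.keras.initializers.LecunNormal(seed=7)
config = initializer.get_config()  # typically contains the seed, e.g. {'seed': 7}
restored = tf.keras.initializers.LecunNormal.from_config(config)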
__call__
__call__(
    shape, dtype=None
)
Returns a tensor object initialized as specified by the initializer.
| Args | |
|---|---|
| shape | Shape of the tensor. |
| dtype | Optional dtype of the tensor. Only floating point types are supported. If not specified, tf.keras.backend.floatx() is used, which defaults to float32 unless you configured it otherwise (via tf.keras.backend.set_floatx(float_dtype)). |
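A minimal sketch (assuming TF 2.x eager execution) of calling the initializer directly with an explicit floating point dtype:
# Sketch: request a specific floating point dtype for the generated values.
import tensorflow as tf
initializer = tf.keras.initializers.LecunNormal()
values = initializer(shape=(2, 2), dtype=tf.float64)
print(values.dtype)  # <dtype: 'float64'>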