tf.nn.relu6
Computes Rectified Linear 6: min(max(features, 0), 6).
tf.nn.relu6(
    features, name=None
)
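The op applies the clipping elementwise, so for floating-point inputs it matches composing tf.maximum and tf.minimum directly; a quick sketch of that identity (the sample values are illustrative, not from this page):

x = tf.constant([-3.0, 2.0, 7.0])
tf.nn.relu6(x).numpy()
array([0., 2., 6.], dtype=float32)
tf.minimum(tf.maximum(x, 0.0), 6.0).numpy()
array([0., 2., 6.], dtype=float32)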
In comparison with tf.nn.relu, the relu6 activation has been empirically shown to
perform better under low-precision conditions (e.g., fixed-point inference) by
encouraging the model to learn sparse features earlier.
Source: Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010.
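For instance, relu6 can be passed directly as the activation of a Keras layer, a pattern used in quantization-friendly architectures such as MobileNet; a minimal sketch (the layer width and input shape are illustrative, not from this page):

layer = tf.keras.layers.Dense(4, activation=tf.nn.relu6)
out = layer(tf.ones([1, 3]))
bool(tf.reduce_all((out >= 0.0) & (out <= 6.0)))  # outputs stay in [0, 6]
True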
For example:
x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)
y = tf.nn.relu6(x)
y.numpy()
array([0., 0., 0., 6., 6.], dtype=float32)
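Because the output is clipped at both 0 and 6, the gradient is 1 only for inputs strictly inside that interval and 0 elsewhere; a short sketch with tf.GradientTape (not part of the original page) makes this visible:

x = tf.constant([-1.0, 3.0, 8.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.relu6(x)
tape.gradient(y, x).numpy()
array([0., 1., 0.], dtype=float32)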
Args
features: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
name: A name for the operation (optional).

Returns
A Tensor with the same type as features.
References
Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010 (pdf).