Same as `tf.maximum`, but with helpful gradient for `inputs < bound`.
```python
tfc.ops.lower_bound(
    inputs,
    bound,
    gradient='identity_if_towards',
    name='lower_bound'
)
```
This function behaves just like `tf.maximum`, but the behavior of the gradient with respect to `inputs` for input values that hit the bound depends on `gradient`:
- If set to `'disconnected'`, the returned gradient is zero for values that hit the bound. This is identical to the behavior of `tf.maximum`.
- If set to `'identity'`, the gradient is unconditionally replaced with the identity function (i.e., pretending this function does not exist).
- If set to `'identity_if_towards'`, the gradient is replaced with the identity function, but only if applying gradient descent would push the values of `inputs` towards the bound. For gradient values that push away from the bound, the returned gradient is still zero.
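A minimal comparison sketch, using made-up values and assuming TensorFlow 2.x with the `tensorflow_compression` package imported as `tfc`. Per the descriptions above, for the clipped entry the gradient should be zero under `'disconnected'`, always pass through under `'identity'`, and pass through only when descent would push the value up toward the bound under `'identity_if_towards'`.

```python
import tensorflow as tf
import tensorflow_compression as tfc

x = tf.Variable([-1.0, 2.0])  # first entry sits below the bound of 0.0

for mode in ("disconnected", "identity", "identity_if_towards"):
    for sign in (1.0, -1.0):
        with tf.GradientTape() as tape:
            y = tfc.ops.lower_bound(x, 0.0, gradient=mode)
            # sign=+1: descent pushes the clipped entry further below the
            # bound; sign=-1: descent pushes it up toward the bound.
            loss = sign * tf.reduce_sum(y)
        print(mode, sign, tape.gradient(loss, x).numpy())
```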
| Args | |
| --- | --- |
| `inputs` | Input tensor. |
| `bound` | Lower bound for the input tensor. |
| `gradient` | `'disconnected'`, `'identity'`, or `'identity_if_towards'` (default). |
| `name` | Name for this op. |
| Returns | |
| --- | --- |
| `tf.maximum(inputs, bound)` | |
| Raises | |
| --- | --- |
| `ValueError` | for invalid value of `gradient`. |
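For intuition only, here is a rough sketch (not the library's actual implementation) of how the `'identity_if_towards'` behavior can be expressed with `tf.custom_gradient`: the upstream gradient is passed through where the input is not clipped, or where a descent step with that gradient would move a clipped input up toward the bound (i.e., the upstream gradient is negative). The function name is hypothetical.

```python
import tensorflow as tf

def lower_bound_identity_if_towards(inputs, bound):
    """Hypothetical sketch of a lower bound with 'identity_if_towards' gradient."""
    inputs = tf.convert_to_tensor(inputs)
    bound = tf.cast(bound, inputs.dtype)

    @tf.custom_gradient
    def _op(x):
        def grad(upstream):
            # Pass the gradient through where x is not clipped, or where a
            # descent step (x -= lr * upstream) would move x toward the bound.
            pass_through = (x >= bound) | (upstream < 0)
            return tf.where(pass_through, upstream, tf.zeros_like(upstream))
        return tf.maximum(x, bound), grad

    return _op(inputs)
```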