This function behaves just like tf.maximum, but the behavior of the gradient
with respect to inputs for input values that hit the bound depends on the
gradient argument:
- If set to 'disconnected', the returned gradient is zero for values that hit
  the bound. This is identical to the behavior of tf.maximum.
- If set to 'identity', the gradient is unconditionally replaced with the
  identity function (i.e., gradients are passed through as if this function
  did not exist).
- If set to 'identity_if_towards' (the default), the gradient is replaced with
  the identity function, but only if applying gradient descent would push the
  values of inputs towards the bound. For gradient values that push away from
  the bound, the returned gradient is still zero.
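The following is a minimal sketch of how these three gradient modes could be
expressed with tf.custom_gradient. The function name lower_bound_sketch and its
structure are illustrative assumptions for clarity, not the library's
implementation.

```python
import tensorflow as tf

def lower_bound_sketch(inputs, bound, gradient="identity_if_towards"):
    """Acts like tf.maximum(inputs, bound), with a configurable gradient at the bound."""
    bound = tf.cast(bound, inputs.dtype)

    if gradient == "disconnected":
        # Plain tf.maximum: zero gradient wherever inputs fall below the bound.
        return tf.maximum(inputs, bound)

    @tf.custom_gradient
    def _bounded(x):
        y = tf.maximum(x, bound)

        def grad(dy):
            if gradient == "identity":
                # Pass the gradient through unchanged, as if this op were the identity.
                return dy
            # 'identity_if_towards': pass the gradient through where the input is at
            # or above the bound, or where gradient descent (x -= lr * dy) would move
            # the input upwards, i.e. towards the bound (dy < 0). Otherwise zero it.
            pass_through = tf.logical_or(x >= bound, dy < 0)
            return tf.where(pass_through, dy, tf.zeros_like(dy))

        return y, grad

    return _bounded(inputs)
```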
Args:
  inputs: Input tensor.
  bound: Lower bound for the input tensor.
  gradient: One of 'disconnected', 'identity', or 'identity_if_towards'
    (default).
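Assuming this page documents tfc.lower_bound from the tensorflow_compression
package (the function name is an assumption based on the description above), a
short usage example with the default 'identity_if_towards' behavior might look
like this:

```python
import tensorflow as tf
import tensorflow_compression as tfc  # assumed package and function name

x = tf.Variable([-1.0, 0.5, 2.0])
with tf.GradientTape() as tape:
    y = tfc.lower_bound(x, 0.0)  # forward pass equals tf.maximum(x, 0.0)
    loss = tf.reduce_sum(y)      # dL/dy is 1.0 for every element
grads = tape.gradient(loss, x)
# With the default 'identity_if_towards', the element below the bound (-1.0)
# receives a zero gradient, since a positive gradient would push it further
# away from the bound; the other elements receive gradient 1.0.
print(grads.numpy())  # -> [0. 1. 1.]
```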
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-04-26 UTC."],[],[]]