Update '*var' by subtracting 'alpha' * 'delta' from it.
tf.raw_ops.ApplyGradientDescent(
var, alpha, delta, use_locking=False, name=None
)
Args:
  var: A mutable Tensor. Must be one of the following types: float32,
    float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8,
    qint32, bfloat16, uint16, complex128, half, uint32, uint64. Should be
    from a Variable().
  alpha: A Tensor. Must have the same type as var. Scaling factor. Must be
    a scalar.
  delta: A Tensor. Must have the same type as var. The change.
  use_locking: An optional bool. Defaults to False. If True, the subtraction
    will be protected by a lock; otherwise the behavior is undefined, but may
    exhibit less contention.
  name: A name for the operation (optional).

Returns:
  A mutable Tensor. Has the same type as var.
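For illustration, a minimal sketch of one gradient-descent step with this op. The setup is an assumption, not part of the reference above: ApplyGradientDescent works on TF1-style ref variables, so the sketch runs in graph mode via tf.compat.v1; the variable name "w" and the numeric values are arbitrary, and use_locking is left at its default. (With TF2 resource variables, the sibling op tf.raw_ops.ResourceApplyGradientDescent, which takes the variable's handle, is the usual choice.)

import tensorflow as tf

# Sketch only: this op expects a mutable (ref-type) variable, so use
# TF1-style graph mode; values below are illustrative assumptions.
tf.compat.v1.disable_eager_execution()

var = tf.compat.v1.get_variable(
    "w", initializer=tf.constant([1.0, 2.0, 3.0]), use_resource=False)
alpha = tf.constant(0.1)               # scaling factor (scalar learning rate)
delta = tf.constant([1.0, 1.0, 1.0])   # the change, e.g. a gradient

# Builds the update var -= alpha * delta and returns the updated tensor.
update = tf.raw_ops.ApplyGradientDescent(var=var, alpha=alpha, delta=delta)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(update))  # approximately [0.9 1.9 2.9]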