Updates '*var' using the FOBOS algorithm with a fixed learning rate.
tf.raw_ops.ResourceApplyProximalGradientDescent(
var, alpha, l1, l2, delta, use_locking=False, name=None
)
prox_v = var - alpha * delta
var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}
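For reference, a minimal NumPy sketch of the update above (illustrative only, not the op's actual implementation; names mirror the formula):

```python
import numpy as np

def fobos_update(var, alpha, l1, l2, delta):
    """One FOBOS step: a gradient step followed by the proximal step."""
    prox_v = var - alpha * delta                           # plain gradient step
    shrunk = np.maximum(np.abs(prox_v) - alpha * l1, 0.0)  # soft-thresholding (L1)
    return np.sign(prox_v) / (1.0 + alpha * l2) * shrunk   # L2 shrinkage
```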
| Args | |
|---|---|
| `var` | A `Tensor` of type `resource`. Should be from a `Variable()`. |
| `alpha` | A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Scaling factor. Must be a scalar. |
| `l1` | A `Tensor`. Must have the same type as `alpha`. L1 regularization. Must be a scalar. |
| `l2` | A `Tensor`. Must have the same type as `alpha`. L2 regularization. Must be a scalar. |
| `delta` | A `Tensor`. Must have the same type as `alpha`. The change. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |

| Returns |
|---|
| The created `Operation`. |
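A minimal eager-mode usage sketch (the variable and constant values are illustrative):

```python
import tensorflow as tf

var = tf.Variable([1.0, -2.0, 3.0])

# Apply one proximal gradient step in place; `delta` plays the role of a gradient.
tf.raw_ops.ResourceApplyProximalGradientDescent(
    var=var.handle,          # resource handle to the variable
    alpha=tf.constant(0.1),  # fixed learning rate
    l1=tf.constant(0.01),    # L1 regularization strength
    l2=tf.constant(0.001),   # L2 regularization strength
    delta=tf.constant([0.5, -0.5, 0.5]),
)
print(var.numpy())  # var now holds the proximally updated values
```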