Update '*var' according to the adadelta scheme.
accum = rho() * accum + (1 - rho()) * grad.square();
update = (update_accum + epsilon()).sqrt() * (accum + epsilon()).rsqrt() * grad;
update_accum = rho() * update_accum + (1 - rho()) * update.square();
var -= update;
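For reference, the same arithmetic can be written out in plain Java. This is only an illustrative sketch of the scheme above and not part of the TensorFlow API; the names mirror the pseudocode, and applying lr as a multiplier on the subtracted update is an assumption (the pseudocode omits lr, which is documented below only as a scaling factor).

```java
public final class AdadeltaStepSketch {

  // One Adadelta step, in place, over flat arrays, mirroring the pseudocode above.
  // lr is assumed to scale the subtracted update; see the note in the lead-in.
  static void step(float[] var, float[] accum, float[] updateAccum,
                   float[] grad, float lr, float rho, float epsilon) {
    for (int i = 0; i < var.length; i++) {
      // accum = rho * accum + (1 - rho) * grad^2
      accum[i] = rho * accum[i] + (1f - rho) * grad[i] * grad[i];
      // update = sqrt(update_accum + epsilon) / sqrt(accum + epsilon) * grad
      float update = (float) (Math.sqrt(updateAccum[i] + epsilon)
                              / Math.sqrt(accum[i] + epsilon)) * grad[i];
      // update_accum = rho * update_accum + (1 - rho) * update^2
      updateAccum[i] = rho * updateAccum[i] + (1f - rho) * update * update;
      // var -= update (scaled by lr, per the assumption above)
      var[i] -= lr * update;
    }
  }
}
```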
Nested Classes
class | ApplyAdadelta.Options | Optional attributes for ApplyAdadelta
Constants
String | OP_NAME | The name of this op, as known by TensorFlow core engine |
Public Methods
Output<T> | asOutput() | Returns the symbolic handle of the tensor.
static <T extends TType> ApplyAdadelta<T> | create(Scope scope, Operand<T> var, Operand<T> accum, Operand<T> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, Options... options) | Factory method to create a class wrapping a new ApplyAdadelta operation.
Output<T> | out() | Same as "var".
static ApplyAdadelta.Options | useLocking(Boolean useLocking)
Constants
public static final String OP_NAME
The name of this op, as known by TensorFlow core engine
Public Methods
public Output<T> asOutput ()
Returns the symbolic handle of the tensor.
Inputs to TensorFlow operations are outputs of another TensorFlow operation. This method is used to obtain a symbolic handle that represents the computation of the input.
public static ApplyAdadelta<T> create (Scope scope, Operand<T> var, Operand<T> accum, Operand<T> accumUpdate, Operand<T> lr, Operand<T> rho, Operand<T> epsilon, Operand<T> grad, Options... options)
Factory method to create a class wrapping a new ApplyAdadelta operation.
Parameters
scope | current scope |
var | Should be from a Variable(). |
accum | Should be from a Variable(). |
accumUpdate | Should be from a Variable(). |
lr | Scaling factor. Must be a scalar. |
rho | Decay factor. Must be a scalar. |
epsilon | Constant factor. Must be a scalar. |
grad | The gradient. |
options | carries optional attribute values |
Returns
- a new instance of ApplyAdadelta
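A minimal usage sketch in graph mode follows. Only the ApplyAdadelta.create(...) call, its useLocking option, and out() are taken from this page; the surrounding setup (Graph/Ops creation, tf.variable and tf.constant calls, shapes, and hyperparameter values) is an illustrative assumption, and the variables would still need to be initialized (for example with Assign ops) before the graph is run.

```java
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.Variable;
import org.tensorflow.op.train.ApplyAdadelta;
import org.tensorflow.types.TFloat32;

public class ApplyAdadeltaExample {
  public static void main(String[] args) {
    try (Graph graph = new Graph()) {
      Ops tf = Ops.create(graph);

      // Mutable state: the variable being optimized plus the two accumulators.
      // (Assumed to be created via tf.variable; initialization is omitted here.)
      Variable<TFloat32> var = tf.variable(Shape.of(3), TFloat32.class);
      Variable<TFloat32> accum = tf.variable(Shape.of(3), TFloat32.class);
      Variable<TFloat32> accumUpdate = tf.variable(Shape.of(3), TFloat32.class);

      // Scalar hyperparameters (illustrative values).
      Operand<TFloat32> lr = tf.constant(0.001f);
      Operand<TFloat32> rho = tf.constant(0.95f);
      Operand<TFloat32> epsilon = tf.constant(1e-7f);

      // The gradient would normally come from a training computation;
      // a constant stands in for it here.
      Operand<TFloat32> grad = tf.constant(new float[] {0.1f, -0.2f, 0.3f});

      // Wire up the in-place Adadelta update, with locking enabled.
      ApplyAdadelta<TFloat32> applied = ApplyAdadelta.create(
          tf.scope(), var, accum, accumUpdate, lr, rho, epsilon, grad,
          ApplyAdadelta.useLocking(true));

      // out() is the same as "var" and can be fed into further ops or fetched.
      Operand<TFloat32> updatedVar = applied.out();
    }
  }
}
```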
public static ApplyAdadelta.Options useLocking (Boolean useLocking)
Parameters
useLocking | If True, updating of the var, accum and update_accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |