tff.learning.optimizers.build_sgdm
Returns a tff.learning.optimizers.Optimizer for momentum SGD.
tff.learning.optimizers.build_sgdm(
    learning_rate: optimizer.Float = 0.01,
    momentum: Optional[optimizer.Float] = None
) -> tff.learning.optimizers.Optimizer
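For example, to construct optimizers from this signature (the parameter values here are illustrative):

import tensorflow_federated as tff

# Plain SGD with the default learning rate of 0.01.
sgd = tff.learning.optimizers.build_sgdm()

# SGD with a custom learning rate and momentum.
sgdm = tff.learning.optimizers.build_sgdm(learning_rate=0.1, momentum=0.9)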
The returned optimizer supports simple gradient descent and its variant with momentum.
If momentum is not used, the update rule given learning rate lr, weights w, and gradients g is:

w = w - lr * g

If momentum m (a float between 0.0 and 1.0) is used, the update rule is:

v = m * v + g
w = w - lr * v

where v is the velocity from previous steps of the optimizer.
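The sketch below applies one step of this rule through the stateful initialize/next API of tff.learning.optimizers.Optimizer; the weight and gradient values are illustrative.

import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1, momentum=0.9)

# Model weights, plus TensorSpecs describing their shape and dtype.
weights = (tf.constant([1.0, 2.0]),)
specs = tf.nest.map_structure(
    lambda w: tf.TensorSpec(w.shape, w.dtype), weights)

# The state carries the hyperparameters and the velocity v (initially zero).
state = optimizer.initialize(specs)

# One step: v = m * v + g, then w = w - lr * v.
gradients = (tf.constant([0.5, 0.5]),)
state, weights = optimizer.next(state, weights, gradients)

# On the first step v = g, so weights is now ([0.95, 1.95],).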
Args

learning_rate | A positive float for the learning rate. Defaults to 0.01.
momentum | An optional float between 0.0 and 1.0. If None, no momentum is used.
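In practice, the returned optimizer is usually passed to a federated algorithm builder rather than stepped manually. A minimal sketch, assuming a model_fn (a no-arg callable returning a tff.learning.models.VariableModel) defined elsewhere:

import tensorflow_federated as tff

learning_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn=model_fn,  # assumed: defined elsewhere
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(
        learning_rate=0.01, momentum=0.9),
    server_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=1.0),
)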