Returns a DPOptimizerClass cls using the GaussianSumQuery.
tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip: float,
    noise_multiplier: float,
    num_microbatches: Optional[int] = None,
    gradient_accumulation_steps: int = 1,
    *args,
    **kwargs
)
This function is a thin wrapper around make_keras_optimizer_class.<locals>.DPOptimizerClass, which can be used to apply a GaussianSumQuery to any DPOptimizerClass.
When combined with stochastic gradient descent, this creates the canonical DP-SGD algorithm of "Deep Learning with Differential Privacy" (see https://arxiv.org/abs/1607.00133).
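For intuition only, below is a minimal sketch of the clip-and-noise step that DP-SGD applies to per-microbatch gradients. It assumes a single parameter tensor and one example per microbatch; the actual optimizer clips the global norm across all model variables, so this is not the library's implementation.

import tensorflow as tf

def noisy_clipped_mean(per_example_grads, l2_norm_clip, noise_multiplier):
  # per_example_grads: a [batch, dim] tensor, one gradient row per example.
  # Clip each row to an L2 norm of at most l2_norm_clip.
  clipped = tf.clip_by_norm(per_example_grads, l2_norm_clip, axes=[1])
  # Sum the clipped gradients (the GaussianSumQuery) and add Gaussian noise
  # whose standard deviation is calibrated to the clip norm.
  noisy_sum = tf.reduce_sum(clipped, axis=0) + tf.random.normal(
      tf.shape(per_example_grads)[1:],
      stddev=l2_norm_clip * noise_multiplier)
  # Normalize by the number of microbatches (here, the batch size).
  return noisy_sum / tf.cast(tf.shape(per_example_grads)[0], tf.float32)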
When instantiating this optimizer, you need to supply several DP-related arguments followed by the standard arguments for tf.keras.optimizers.Adam. As an example, see below or the documentation of DPOptimizerClass.
# Create optimizer.
opt = tf_privacy.DPKerasAdamOptimizer(l2_norm_clip=1.0, noise_multiplier=0.5,
                                      num_microbatches=1, <standard arguments>)
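A self-contained sketch of how such an optimizer might be used with a Keras model is shown below. The toy model and data are placeholders, and tensorflow_privacy is assumed to be imported as tf_privacy. Note that the loss must be unreduced (one value per example), so the optimizer can clip per-microbatch gradients before adding noise and averaging.

import numpy as np
import tensorflow as tf
import tensorflow_privacy as tf_privacy  # assumed import alias

# Toy data and model, purely for illustration.
x = np.random.normal(size=(64, 20)).astype(np.float32)
y = np.random.randint(0, 2, size=(64,))
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(2),
])

opt = tf_privacy.DPKerasAdamOptimizer(
    l2_norm_clip=1.0, noise_multiplier=0.5, num_microbatches=1,
    learning_rate=0.001)  # remaining arguments are standard Adam arguments

# The loss must return one value per example (reduction=NONE) so that
# per-microbatch gradients can be clipped individually.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE)

model.compile(optimizer=opt, loss=loss, metrics=['accuracy'])
model.fit(x, y, epochs=1, batch_size=32)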