Creates a baseline task for autoencoding on EMNIST.
tff.simulation.baselines.emnist.create_autoencoder_task(
    train_client_spec: tff.simulation.baselines.ClientSpec,
    eval_client_spec: Optional[tff.simulation.baselines.ClientSpec] = None,
    only_digits: bool = False,
    cache_dir: Optional[str] = None,
    use_synthetic_data: bool = False
) -> tff.simulation.baselines.BaselineTask
This task involves performing autoencoding on the EMNIST dataset using a densely connected bottleneck network. The model uses 8 layers of widths [1000, 500, 250, 30, 250, 500, 1000, 784], with the final layer being the output layer. Each layer uses a sigmoid activation function, except the smallest layer, which uses a linear activation function.
The goal of the task is to minimize the mean squared error between the input to the network and the output of the network.
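For illustration, here is a Keras sketch of the bottleneck architecture described above, assuming flattened 28x28 inputs (784 features); this is an approximation for reference only, as the task constructs its model internally:

```python
import tensorflow as tf

def build_autoencoder():
  # Bottleneck autoencoder over flattened 28x28 EMNIST pixels (784 features).
  # Sigmoid on every layer except the 30-unit bottleneck, which is linear.
  return tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(1000, activation='sigmoid'),
      tf.keras.layers.Dense(500, activation='sigmoid'),
      tf.keras.layers.Dense(250, activation='sigmoid'),
      tf.keras.layers.Dense(30),  # linear bottleneck
      tf.keras.layers.Dense(250, activation='sigmoid'),
      tf.keras.layers.Dense(500, activation='sigmoid'),
      tf.keras.layers.Dense(1000, activation='sigmoid'),
      tf.keras.layers.Dense(784, activation='sigmoid'),  # reconstruction
  ])
```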
Args | |
---|---|
`train_client_spec` | A `tff.simulation.baselines.ClientSpec` specifying how to preprocess train client data.
`eval_client_spec` | An optional `tff.simulation.baselines.ClientSpec` specifying how to preprocess evaluation client data. If set to `None`, the evaluation datasets will use a batch size of 64 with no extra preprocessing.
`only_digits` | A boolean indicating whether to use the smaller EMNIST-10 dataset containing only the 10 numeric classes (`True`) or the full EMNIST-62 dataset with 62 alphanumeric classes (`False`).
`cache_dir` | An optional directory to cache the downloaded datasets. If `None`, they will be cached to `~/.tff/`.
`use_synthetic_data` | A boolean indicating whether to use synthetic EMNIST data. This option should only be used for testing purposes, to avoid downloading the entire EMNIST dataset.
Returns | |
---|---|
A `tff.simulation.baselines.BaselineTask`. |
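A minimal usage sketch follows; the `ClientSpec` values below are illustrative choices rather than defaults, and `use_synthetic_data=True` keeps the example self-contained by avoiding the EMNIST download:

```python
import tensorflow_federated as tff

# Illustrative preprocessing: 1 local epoch, batch size 32 per client.
train_client_spec = tff.simulation.baselines.ClientSpec(
    num_epochs=1, batch_size=32)

# Synthetic data avoids downloading the full EMNIST dataset (testing only).
task = tff.simulation.baselines.emnist.create_autoencoder_task(
    train_client_spec, use_synthetic_data=True)

# The returned BaselineTask bundles the federated datasets and a model builder.
federated_train_data = task.datasets.train_data
model_fn = task.model_fn
```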