Stop training when a monitored metric has stopped improving.
Inherits From: Callback
tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=False,
    start_from_epoch=0
)
Assume the goal of training is to minimize the loss. The metric to be monitored would then be 'loss', and the mode would be 'min'. A model.fit() training loop checks at the end of every epoch whether the loss is still decreasing, taking min_delta and patience into account if applicable. Once the loss is found to be no longer decreasing, model.stop_training is set to True and training terminates.

The quantity to be monitored needs to be available in the logs dict. To make it available, pass the loss or metrics at model.compile().
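For instance, if the monitored quantity is a validation metric such as 'val_accuracy', the metric must be compiled into the model and validation data must be passed to model.fit(); otherwise the key never appears in the logs dict. The sketch below is a minimal illustration of that wiring (the random data is a stand-in for a real dataset, and the layer sizes are arbitrary):

import numpy as np
import tensorflow as tf

# Random stand-in data; replace with a real dataset.
x_train = np.random.rand(100, 20).astype('float32')
y_train = (np.random.rand(100, 1) > 0.5).astype('float32')
x_val = np.random.rand(20, 20).astype('float32')
y_val = (np.random.rand(20, 1) > 0.5).astype('float32')

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
# Compiling the 'accuracy' metric is what makes 'val_accuracy' appear in logs.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

callback = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', mode='max', patience=5)
# validation_data is required here, otherwise 'val_accuracy' is never computed.
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, callbacks=[callback], verbose=0)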
Example:

import numpy as np
import tensorflow as tf

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
# This callback will stop the training when there is no improvement in
# the loss for three consecutive epochs.
model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
                    epochs=10, batch_size=1, callbacks=[callback],
                    verbose=0)
print(len(history.history['loss']))  # Only 4 epochs are run, so this prints 4.
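The remaining constructor arguments are illustrated below; the specific values are arbitrary and only meant to show what each argument controls:

import tensorflow as tf

callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # quantity read from the logs dict at the end of each epoch
    min_delta=1e-3,             # changes smaller than this do not count as an improvement
    patience=5,                 # stop after 5 epochs without improvement
    mode='min',                 # lower val_loss is better
    restore_best_weights=True,  # roll the model back to its best epoch when stopping
    start_from_epoch=10,        # ignore the first 10 epochs while the model warms up
    verbose=1,                  # print a message when training is stopped
)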
Methods
get_monitor_value
get_monitor_value(
logs
)
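get_monitor_value(logs) reads the monitored quantity out of the logs dict for the current epoch. Assuming that is its only responsibility, a subclass can override it to stop training on a derived quantity. The sketch below (a hypothetical CombinedEarlyStopping, assuming both 'val_loss' and 'val_mae' are present in logs, i.e. 'mae' was compiled as a metric and validation data was passed to fit()) is one such illustration, not part of the documented API:

import tensorflow as tf

class CombinedEarlyStopping(tf.keras.callbacks.EarlyStopping):
    """Stops training on a quantity derived from several logged values."""

    def get_monitor_value(self, logs):
        logs = logs or {}
        val_loss = logs.get('val_loss')
        val_mae = logs.get('val_mae')
        if val_loss is None or val_mae is None:
            # Returning None skips the comparison for this epoch.
            return None
        return val_loss + 0.1 * val_mae

callback = CombinedEarlyStopping(patience=2, mode='min')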
set_model
set_model(
model
)
set_params
set_params(
params
)
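set_model and set_params are inherited from Callback and are normally called by model.fit() itself; they only need to be called by hand when the callback is driven outside of fit(), for example in a custom training loop. The following is a minimal sketch of such manual wiring, using random stand-in data; it is an illustration, not a recommended pattern:

import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse')

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2)
callback.set_model(model)                           # hand the callback a reference to the model
callback.set_params({'epochs': 20, 'verbose': 0})   # training params that fit() would normally set

x = np.random.rand(64, 4).astype('float32')
y = np.random.rand(64, 1).astype('float32')

model.stop_training = False
callback.on_train_begin()
for epoch in range(20):
    loss = model.train_on_batch(x, y)
    callback.on_epoch_end(epoch, logs={'loss': float(loss)})
    if model.stop_training:                          # flipped to True by EarlyStopping
        break
callback.on_train_end()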