Computes the mean squared logarithmic error between y_true and y_pred.
View aliases

Main aliases:
  tf.keras.losses.mean_squared_logarithmic_error
  tf.keras.losses.msle
  tf.keras.metrics.MSLE
  tf.keras.metrics.mean_squared_logarithmic_error
  tf.keras.metrics.msle
  tf.losses.MSLE
  tf.losses.mean_squared_logarithmic_error
  tf.losses.msle
  tf.metrics.MSLE
  tf.metrics.mean_squared_logarithmic_error
  tf.metrics.msle
Compat aliases for migration

See the Migration guide for more details.

  tf.compat.v1.keras.losses.MSLE
  tf.compat.v1.keras.losses.mean_squared_logarithmic_error
  tf.compat.v1.keras.losses.msle
  tf.compat.v1.keras.metrics.MSLE
  tf.compat.v1.keras.metrics.mean_squared_logarithmic_error
  tf.compat.v1.keras.metrics.msle
tf.keras.losses.MSLE(
    y_true, y_pred
)
The loss is computed as:

loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
Standalone usage:

import numpy as np
import tensorflow as tf

y_true = np.random.randint(0, 2, size=(2, 3))
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
assert loss.shape == (2,)

# The implementation clips inputs below the backend epsilon (1e-7 by default),
# so mirror that before checking against a direct NumPy computation.
y_true = np.maximum(y_true, 1e-7)
y_pred = np.maximum(y_pred, 1e-7)
assert np.allclose(
    loss.numpy(),
    np.mean(
        np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))
Args

y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns

Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1].
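Beyond standalone use, the function can also be passed as the loss argument when compiling a Keras model. The sketch below is illustrative only; the model architecture and optimizer choice are assumptions, not part of this reference.

import tensorflow as tf

# Illustrative regression model; the layer sizes here are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

# Pass the function (or any of its aliases) directly as the loss.
# Keras also accepts the string shortcut "msle".
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.mean_squared_logarithmic_error,
)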