Computes the cosine similarity between labels and predictions.
tf.keras.losses.cosine_similarity(
    y_true, y_pred, axis=-1
)
Note that it is a number between -1 and 1. When it is a negative number between
-1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater
similarity. This makes it usable as a loss function in a setting where you try
to maximize the proximity between predictions and targets. If either y_true or
y_pred is a zero vector, the cosine similarity will be 0 regardless of the
proximity between predictions and targets.
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
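The reduction above can be reproduced directly with tf.nn.l2_normalize. The following is a minimal sketch of that formula, not the library's actual implementation (the built-in additionally handles dtype casting and other input types not shown here):

```python
import tensorflow as tf

def manual_cosine_similarity(y_true, y_pred, axis=-1):
    # Normalize each vector along `axis`, then take the negative sum
    # of the elementwise product, per the formula above.
    y_true = tf.nn.l2_normalize(tf.convert_to_tensor(y_true), axis=axis)
    y_pred = tf.nn.l2_normalize(tf.convert_to_tensor(y_pred), axis=axis)
    return -tf.reduce_sum(y_true * y_pred, axis=axis)

# A zero vector normalizes to zero, so the loss for that pair is 0:
print(manual_cosine_similarity([[0., 0.]], [[1., 0.]]).numpy())  # [-0.]
```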
Usage:
y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]
loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
# l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
# l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
# l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
# loss = -sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)
#      = -[(0. + 0.), (0.5 + 0.5)]
loss.numpy()
array([-0., -0.999], dtype=float32)
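For training, the same computation is typically attached to a model through the tf.keras.losses.CosineSimilarity class wrapper, which adds a batch reduction on top of the per-sample values shown above. A minimal sketch, where the model itself is just a placeholder:

```python
import tensorflow as tf

# Placeholder model; any Keras model producing vector outputs works.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])

# axis=-1 matches the function's default shown in the signature above.
model.compile(optimizer='sgd',
              loss=tf.keras.losses.CosineSimilarity(axis=-1))
```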
Args | |
---|---|
y_true | Tensor of true targets. |
y_pred | Tensor of predicted targets. |
axis | Axis along which to determine similarity. |
Returns | |
---|---|
Cosine similarity tensor. |