Computes the contrastive loss between y_true and y_pred.
@tf.function
tfa.losses.contrastive_loss(
    y_true: tfa.types.TensorLike,
    y_pred: tfa.types.TensorLike,
    margin: tfa.types.Number = 1.0
) -> tf.Tensor
This loss encourages embeddings to be close to each other for samples with the same label, and to be separated by at least the margin constant for samples with different labels.
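For intuition, the per-pair loss can be written down directly from this description. The sketch below assumes y_true is 1 for a similar pair and 0 for a dissimilar pair, and that y_pred holds the pairwise distances; it follows the formulation in the Hadsell et al. paper linked below (without the 1/2 scaling) and is not necessarily identical to the library's internals.

import tensorflow as tf

def contrastive_loss_sketch(y_true, y_pred, margin=1.0):
    # Similar pairs (y_true == 1) are penalized by their squared distance;
    # dissimilar pairs (y_true == 0) by the squared shortfall from the margin.
    y_true = tf.cast(y_true, y_pred.dtype)
    similar_term = y_true * tf.math.square(y_pred)
    dissimilar_term = (1.0 - y_true) * tf.math.square(
        tf.math.maximum(margin - y_pred, 0.0))
    return similar_term + dissimilar_term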
The Euclidean distances y_pred between two embedding matrices a and b with shape [batch_size, hidden_size] can be computed as follows:
a = tf.constant([[1, 2],
[3, 4],
[5, 6]], dtype=tf.float16)
b = tf.constant([[5, 9],
[3, 6],
[1, 8]], dtype=tf.float16)
y_pred = tf.linalg.norm(a - b, axis=1)
y_pred
<tf.Tensor: shape=(3,), dtype=float16, numpy=array([8.06 , 2. , 4.473], dtype=float16)>
Note: constants a and b have been used purely for example purposes and have no significant value.
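With the distances above, the loss can be computed by passing per-pair labels alongside y_pred. The labels here are assumed purely for illustration (1 marks a similar pair, 0 a dissimilar one):

import tensorflow_addons as tfa

# Continuing from the snippet above: y_pred holds the three pairwise distances.
y_true = tf.constant([0.0, 1.0, 0.0])
loss = tfa.losses.contrastive_loss(y_true, y_pred, margin=1.0)
# With margin=1.0, both dissimilar pairs are already farther apart than the
# margin, so (per the sketch above) only the similar pair should contribute
# its squared distance, 4.0.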
See: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
Returns |  |
---|---|
contrastive_loss | 1-D float Tensor with shape [batch_size]. |