Computes Yeti logistic loss between y_true and y_pred.
tfr.keras.losses.YetiLogisticLoss(
    reduction: tf.losses.Reduction = tf.losses.Reduction.AUTO,
    name: Optional[str] = None,
    lambda_weight: Optional[tfr.keras.losses.YetiDCGLambdaWeight] = None,
    temperature: float = 0.1,
    sample_size: int = 8,
    gumbel_temperature: float = 1.0,
    seed: Optional[int] = None,
    ragged: bool = False
)
Adapted to neural network models from the Yeti loss implementation for GBDTs in (Lyzhin et al., 2022).
This code base supports Yeti loss with the DCG lambda weight option. By default, a YetiDCGLambdaWeight with default settings is used. To customize the weighting, set lambda_weight to a configured YetiDCGLambdaWeight, as sketched below.
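A minimal sketch of passing a custom lambda weight, assuming only the default YetiDCGLambdaWeight constructor (see its API reference for the configurable arguments):

import tensorflow_ranking as tfr

# Assumption: a default-constructed YetiDCGLambdaWeight; pass your own
# configured instance to change the DCG weighting.
lambda_weight = tfr.keras.losses.YetiDCGLambdaWeight()
loss = tfr.keras.losses.YetiLogisticLoss(lambda_weight=lambda_weight, seed=1)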
For each list of scores s in y_pred and list of labels y in y_true:
loss = sum_a sum_i I[y_i > y_{i\pm 1}] * log(1 + exp(-(s^a_i - s^a_{i\pm 1})))
where
s^a_i = s_i + gumbel(0, 1)^a
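To illustrate the formula only (this is not the library implementation, which additionally applies temperature, gumbel_temperature, lambda weights, and the reduction), a NumPy sketch of the sum over Gumbel samples and adjacent pairs:

import numpy as np

def yeti_logistic_loss_sketch(y, s, sample_size=8, seed=1):
  # Illustrative only: accumulates the pairwise logistic terms from the
  # formula above over sample_size Gumbel-perturbed copies of the scores.
  rng = np.random.default_rng(seed)
  n = len(s)
  total = 0.0
  for _ in range(sample_size):
    # s^a_i = s_i + Gumbel(0, 1) noise, drawn independently per sample a.
    s_a = np.asarray(s, dtype=float) + rng.gumbel(size=n)
    for i in range(n):
      for j in (i - 1, i + 1):            # adjacent pairs j = i +/- 1
        if 0 <= j < n and y[i] > y[j]:    # indicator I[y_i > y_j]
          total += np.log1p(np.exp(-(s_a[i] - s_a[j])))
  return total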
Standalone usage:
y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]
loss = tfr.keras.losses.YetiLogisticLoss(sample_size=2, seed=1)
loss(y_true, y_pred).numpy()
0.90761846
# Using ragged tensors
y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
loss = tfr.keras.losses.YetiLogisticLoss(seed=1, ragged=True)
loss(y_true, y_pred).numpy()
0.43420443
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tfr.keras.losses.YetiLogisticLoss())
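A slightly fuller, hypothetical sketch; the scoring model, feature dimension, and metric choice below are illustrative assumptions, not part of this API:

import tensorflow as tf
import tensorflow_ranking as tfr

# Hypothetical per-item scoring model: inputs are [batch, list_size, 16]
# feature tensors, outputs are [batch, list_size] scores.
inputs = tf.keras.Input(shape=(None, 16), dtype=tf.float32)
scores = tf.squeeze(tf.keras.layers.Dense(1)(inputs), axis=-1)
model = tf.keras.Model(inputs=inputs, outputs=scores)

model.compile(
    optimizer='sgd',
    loss=tfr.keras.losses.YetiLogisticLoss(),
    metrics=[tfr.keras.metrics.NDCGMetric(topn=5)],
)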
Definition:
\[ \mathcal{L}(\{y\}, \{s\}) = \sum_a \sum_i \sum_{j=i\pm 1}I[y_i > y_j] \log(1 + \exp(-(s^a_i - s^a_j))) \]
References | |
---|---|
Which Tricks are Important for Learning to Rank?, Lyzhin et al., 2022 |
Args | |
---|---|
reduction | Type of tf.keras.losses.Reduction to apply to loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used under a tf.distribute.Strategy, except via Model.compile() and Model.fit(), using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see the custom training tutorial for more details. A short example of setting an explicit reduction follows this table.
name | Optional name for the instance.
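As referenced in the reduction row above, a hedged sketch of choosing an explicit reduction for a custom training loop under tf.distribute.Strategy (scaling by the global batch size is then your responsibility):

import tensorflow as tf
import tensorflow_ranking as tfr

# AUTO / SUM_OVER_BATCH_SIZE raise an error in a custom distributed loop,
# so reduce with SUM and divide by the global batch size in the train step.
loss_fn = tfr.keras.losses.YetiLogisticLoss(
    reduction=tf.keras.losses.Reduction.SUM)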
Methods
from_config
@classmethod
from_config(config, custom_objects=None)
Instantiates a Loss from its config (output of get_config()).
Args | |
---|---|
config | Output of get_config().

Returns | |
---|---|
A Loss instance. |
get_config
get_config() -> Dict[str, Any]
Returns the config dictionary for a Loss instance.
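A minimal round-trip sketch (the exact keys in the config dictionary are an implementation detail):

import tensorflow_ranking as tfr

loss = tfr.keras.losses.YetiLogisticLoss(sample_size=2, seed=1)
config = loss.get_config()                                      # plain Python dict
restored = tfr.keras.losses.YetiLogisticLoss.from_config(config)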
__call__
__call__(
    y_true: tfr.keras.model.TensorLike,
    y_pred: tfr.keras.model.TensorLike,
    sample_weight: Optional[utils.TensorLike] = None
) -> tf.Tensor
See _RankingLoss.
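A hedged example of invoking the loss directly with the optional sample_weight argument; here a single per-list weight is assumed, and the exact weighting semantics follow the underlying ranking loss:

import tensorflow_ranking as tfr

loss = tfr.keras.losses.YetiLogisticLoss(sample_size=2, seed=1)
y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]
loss(y_true, y_pred, sample_weight=[2.0])   # returns a scalar tf.Tensor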