A class implementing ranking policies in TF Agents.
Inherits From: TFPolicy
tf_agents.bandits.policies.ranking_policy.RankingPolicy(
    num_items: int,
    num_slots: int,
    time_step_spec: tf_agents.typing.types.TimeStep,
    network: tf_agents.typing.types.Network,
    item_sampler: tfd.Distribution,
    penalty_mixture_coefficient: float = 1.0,
    logits_temperature: float = 1.0,
    name: Optional[Text] = None
)
At initialization, the ranking policy needs the number of items to rank per round, a scorer network, and a score penalizer function. The penalizer should ensure that similar items don't all get high scores, so that a diverse set of items is recommended.
If the number of items to rank varies from iteration to iteration, the observation contains a num_actions value that specifies the number of items available. Note that in this case the number of ranked items can be less than the number of slots. Thus, consumers of the output of policy.action should always use the num_actions value to determine which part of the output is the action ranking.

If the num_actions field is not used, the policy is always presented with num_items items, which should be greater than or equal to num_slots.
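The slicing logic described above can be sketched in plain Python; the variable names and values here are illustrative, not part of the tf_agents API:

```python
# Sketch (assumption, not the tf_agents API): consuming a ranking action
# when the number of available items varies per round.
num_slots = 5
ranking = [7, 2, 9, 4, 1]   # action output: one item id per slot
num_actions = 3             # only 3 items were available this round

# Only the first min(num_actions, num_slots) entries form a valid ranking;
# the remaining slots carry no meaningful items.
valid = ranking[:min(num_actions, num_slots)]
print(valid)                # -> [7, 2, 9]
```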
Methods
action
action(
    time_step: tf_agents.trajectories.TimeStep,
    policy_state: tf_agents.typing.types.NestedTensor = (),
    seed: Optional[types.Seed] = None
) -> tf_agents.trajectories.PolicyStep
Generates next action given the time_step and policy_state.
| Args | |
|---|---|
| time_step | A TimeStep tuple corresponding to time_step_spec(). |
| policy_state | A Tensor, or a nested dict, list or tuple of Tensors representing the previous policy_state. |
| seed | Seed to use if action performs sampling (optional). |
| Returns | |
|---|---|
| A PolicyStep named tuple containing: action: an action Tensor matching the action_spec; state: a policy state tensor to be fed into the next call to action; info: optional side information such as action log probabilities. |
| Raises | |
|---|---|
| RuntimeError | If a subclass's `__init__` didn't call `super().__init__()`. |
| ValueError or TypeError | If validate_args is True and inputs or outputs do not match time_step_spec, policy_state_spec, or policy_step_spec. |
distribution
distribution(
    time_step: tf_agents.trajectories.TimeStep,
    policy_state: tf_agents.typing.types.NestedTensor = ()
) -> tf_agents.trajectories.PolicyStep
Generates the distribution over next actions given the time_step.
| Args | |
|---|---|
| time_step | A TimeStep tuple corresponding to time_step_spec(). |
| policy_state | A Tensor, or a nested dict, list or tuple of Tensors representing the previous policy_state. |
| Returns | |
|---|---|
| A PolicyStep named tuple containing: |
| Raises | |
|---|---|
| ValueError or TypeError | If validate_args is True and inputs or outputs do not match time_step_spec, policy_state_spec, or policy_step_spec. |
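The logits_temperature constructor argument scales the scorer's logits before items are sampled. A minimal sketch of temperature scaling in plain Python (this is an illustration of the general technique, not the policy's internal implementation):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaled softmax: low temperature sharpens the
    distribution toward the top-scored item; high temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs_sharp = softmax_with_temperature([2.0, 1.0, 0.0], temperature=0.5)
probs_flat = softmax_with_temperature([2.0, 1.0, 0.0], temperature=10.0)
# probs_sharp puts more mass on the top-scored item than probs_flat does.
```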
get_initial_state
get_initial_state(
    batch_size: Optional[types.Int]
) -> tf_agents.typing.types.NestedTensor
Returns an initial state usable by the policy.
| Args | |
|---|---|
| batch_size | Tensor or constant: size of the batch dimension. Can be None, in which case no batch dimension is added. |
| Returns | |
|---|---|
| A nested object of type policy_state containing properly initialized Tensors. |
update
update(
    policy,
    tau: float = 1.0,
    tau_non_trainable: Optional[float] = None,
    sort_variables_by_name: bool = False
) -> tf.Operation
Update the current policy with another policy.
This would include copying the variables from the other policy.
| Args | |
|---|---|
| policy | Another policy it can update from. |
| tau | A float scalar in [0, 1]. When tau is 1.0 (the default), we do a hard update. This is used for trainable variables. |
| tau_non_trainable | A float scalar in [0, 1] for non-trainable variables. If None, tau is used. |
| sort_variables_by_name | A bool; when True, the variables are sorted by name before the update. |
| Returns | |
|---|---|
| A TF op to do the update. |
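The effect of the tau parameter on a single variable can be sketched in plain Python; this is a minimal illustration of the interpolation scheme, under the common convention new = tau * source + (1 - tau) * target, not the tf_agents implementation:

```python
# Sketch (assumption): tau-weighted soft update of one variable, mirroring
# what update() applies across a policy's variables.
def soft_update(target, source, tau=1.0):
    """tau = 1.0 is a hard copy of source; 0 < tau < 1 interpolates."""
    return [(1.0 - tau) * t + tau * s for t, s in zip(target, source)]

target_w = [0.0, 0.0]
source_w = [1.0, 2.0]
print(soft_update(target_w, source_w, tau=1.0))   # hard update -> [1.0, 2.0]
print(soft_update(target_w, source_w, tau=0.1))   # soft update -> [0.1, 0.2]
```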