Gradient Boosted Trees learning algorithm.
Inherits From: GradientBoostedTreesModel
, CoreModel
, InferenceCoreModel
tfdf.keras.GradientBoostedTreesModel(
task: Optional[TaskType] = core.Task.CLASSIFICATION,
features: Optional[List[core.FeatureUsage]] = None,
exclude_non_specified_features: Optional[bool] = False,
preprocessing: Optional['tf_keras.models.Functional'] = None,
postprocessing: Optional['tf_keras.models.Functional'] = None,
training_preprocessing: Optional['tf_keras.models.Functional'] = None,
ranking_group: Optional[str] = None,
uplift_treatment: Optional[str] = None,
temp_directory: Optional[str] = None,
verbose: int = 1,
hyperparameter_template: Optional[str] = None,
advanced_arguments: Optional[tfdf.keras.AdvancedArguments
] = None,
num_threads: Optional[int] = None,
name: Optional[str] = None,
max_vocab_count: Optional[int] = 2000,
try_resume_training: Optional[bool] = True,
check_dataset: Optional[bool] = True,
tuner: Optional[tfdf.tuner.Tuner
] = None,
discretize_numerical_features: bool = False,
num_discretized_numerical_bins: int = 255,
multitask: Optional[List[MultiTaskItem]] = None,
adapt_subsample_for_maximum_training_duration: Optional[bool] = False,
allow_na_conditions: Optional[bool] = False,
apply_link_function: Optional[bool] = True,
categorical_algorithm: Optional[str] = 'CART',
categorical_set_split_greedy_sampling: Optional[float] = 0.1,
categorical_set_split_max_num_items: Optional[int] = -1,
categorical_set_split_min_item_frequency: Optional[int] = 1,
compute_permutation_variable_importance: Optional[bool] = False,
cross_entropy_ndcg_truncation: Optional[int] = 5,
dart_dropout: Optional[float] = None,
early_stopping: Optional[str] = 'LOSS_INCREASE',
early_stopping_initial_iteration: Optional[int] = 10,
early_stopping_num_trees_look_ahead: Optional[int] = 30,
focal_loss_alpha: Optional[float] = 0.5,
focal_loss_gamma: Optional[float] = 2.0,
forest_extraction: Optional[str] = 'MART',
goss_alpha: Optional[float] = 0.2,
goss_beta: Optional[float] = 0.1,
growing_strategy: Optional[str] = 'LOCAL',
honest: Optional[bool] = False,
honest_fixed_separation: Optional[bool] = False,
honest_ratio_leaf_examples: Optional[float] = 0.5,
in_split_min_examples_check: Optional[bool] = True,
keep_non_leaf_label_distribution: Optional[bool] = True,
l1_regularization: Optional[float] = 0.0,
l2_categorical_regularization: Optional[float] = 1.0,
l2_regularization: Optional[float] = 0.0,
lambda_loss: Optional[float] = 1.0,
loss: Optional[str] = 'DEFAULT',
max_depth: Optional[int] = 6,
max_num_nodes: Optional[int] = None,
maximum_model_size_in_memory_in_bytes: Optional[float] = -1.0,
maximum_training_duration_seconds: Optional[float] = -1.0,
mhld_oblique_max_num_attributes: Optional[int] = None,
mhld_oblique_sample_attributes: Optional[bool] = None,
min_examples: Optional[int] = 5,
missing_value_policy: Optional[str] = 'GLOBAL_IMPUTATION',
ndcg_truncation: Optional[int] = 5,
num_candidate_attributes: Optional[int] = -1,
num_candidate_attributes_ratio: Optional[float] = -1.0,
num_trees: Optional[int] = 300,
pure_serving_model: Optional[bool] = False,
random_seed: Optional[int] = 123456,
sampling_method: Optional[str] = 'RANDOM',
selective_gradient_boosting_ratio: Optional[float] = 0.01,
shrinkage: Optional[float] = 0.1,
sorting_strategy: Optional[str] = 'PRESORT',
sparse_oblique_max_num_projections: Optional[int] = None,
sparse_oblique_normalization: Optional[str] = None,
sparse_oblique_num_projections_exponent: Optional[float] = None,
sparse_oblique_projection_density_factor: Optional[float] = None,
sparse_oblique_weights: Optional[str] = None,
split_axis: Optional[str] = 'AXIS_ALIGNED',
subsample: Optional[float] = 1.0,
uplift_min_examples_in_treatment: Optional[int] = 5,
uplift_split_score: Optional[str] = 'KULLBACK_LEIBLER',
use_hessian_gain: Optional[bool] = False,
validation_interval_in_trees: Optional[int] = 1,
validation_ratio: Optional[float] = 0.1,
explicit_args: Optional[Set[str]] = None
)
A Gradient Boosted Trees (GBT) model, also known as Gradient Boosted Decision Trees (GBDT) or Gradient Boosted Machines (GBM), is a set of shallow decision trees trained sequentially. Each tree is trained to predict, and then "correct" for, the errors of the previously trained trees (more precisely, each tree predicts the gradient of the loss relative to the model output).
Usage example:
import tensorflow_decision_forests as tfdf
import pandas as pd
dataset = pd.read_csv("project/dataset.csv")
tf_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(dataset, label="my_label")
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(tf_dataset)
print(model.summary())
Hyper-parameter tuning:
import tensorflow_decision_forests as tfdf
import pandas as pd
dataset = pd.read_csv("project/dataset.csv")
tf_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(dataset, label="my_label")
tuner = tfdf.tuner.RandomSearch(num_trials=20)
# Hyper-parameters to optimize.
tuner.discret("max_depth", [4, 5, 6, 7])
model = tfdf.keras.GradientBoostedTreesModel(tuner=tuner)
model.fit(tf_dataset)
print(model.summary())
Attributes | ||
---|---|---|
task
|
Task to solve (e.g. Task.CLASSIFICATION, Task.REGRESSION, Task.RANKING, Task.CATEGORICAL_UPLIFT, Task.NUMERICAL_UPLIFT). | |
features
|
Specify the list and semantic of the input features of the model.
If not specified, all the available features will be used. If specified
and if exclude_non_specified_features=True , only the features in
features will be used by the model. If "preprocessing" is used,
features corresponds to the output of the preprocessing. In this case,
it is recommended for the preprocessing to return a dictionary of tensors.
|
|
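For illustration, here is a minimal sketch restricting the model to two hand-picked features with explicit semantics; the column names "age" and "country" are hypothetical and assume the tf_dataset from the usage example above contains them:
import tensorflow_decision_forests as tfdf
# Hypothetical columns: force "age" to be numerical and "country" to be categorical.
features = [
    tfdf.keras.FeatureUsage(name="age", semantic=tfdf.keras.FeatureSemantic.NUMERICAL),
    tfdf.keras.FeatureUsage(name="country", semantic=tfdf.keras.FeatureSemantic.CATEGORICAL),
]
# Only the two listed features are used; all other columns are ignored.
model = tfdf.keras.GradientBoostedTreesModel(
    features=features, exclude_non_specified_features=True)
model.fit(tf_dataset)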
exclude_non_specified_features
|
If true, only use the features specified in
features .
|
|
preprocessing
|
Functional keras model or @tf.function to apply on the input features before training the model. This preprocessing model can consume and return tensors, lists of tensors or dictionaries of tensors. If specified, the model only "sees" the output of the preprocessing (and not the raw input). Can be used to prepare the features or to stack multiple models on top of each other. Unlike preprocessing done in the tf.dataset, the operations in "preprocessing" are serialized with the model. | |
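A minimal sketch of a @tf.function preprocessing stage; the "age" feature and the rescaling are purely illustrative, and the tf_dataset from the usage example above is reused:
import tensorflow as tf
import tensorflow_decision_forests as tfdf
# Hypothetical preprocessing: rescale "age" and pass the other features through.
# The model only "sees" the output of this function, and the function is
# serialized with the model.
@tf.function
def rescale_age(features):
  features = dict(features)
  features["age"] = features["age"] / 100.0
  return features
model = tfdf.keras.GradientBoostedTreesModel(preprocessing=rescale_age)
model.fit(tf_dataset)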
postprocessing
|
Like "preprocessing" but applied on the model output. | |
training_preprocessing
|
Functional keras model or @tf.function to apply on
the input features, labels, and sample_weight before model training.
|
|
ranking_group
|
Only for task=Task.RANKING . Name of a tf.string feature that
identifies queries in a query/document ranking task. The ranking group
is not automatically added to the set of features if
exclude_non_specified_features=false .
|
|
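A minimal ranking sketch; the file path, the "relevance" label and the "query_id" group column are hypothetical:
import pandas as pd
import tensorflow_decision_forests as tfdf
dataset = pd.read_csv("project/ranking_dataset.csv")  # hypothetical path
tf_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(
    dataset, label="relevance", task=tfdf.keras.Task.RANKING)
# "query_id" identifies the query of each example.
model = tfdf.keras.GradientBoostedTreesModel(
    task=tfdf.keras.Task.RANKING, ranking_group="query_id")
model.fit(tf_dataset)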
uplift_treatment
|
Only for task=Task.CATEGORICAL_UPLIFT or task=Task.NUMERICAL_UPLIFT. Name of an integer feature that identifies the treatment in an uplift problem. The value 0 is reserved for the control treatment. | |
temp_directory
|
Temporary directory used to store the model Assets after the
training, and possibly as a work directory during the training. This
temporary directory is necessary for the model to be exported after
training e.g. model.save(path) . If not specified, temp_directory is
set to a temporary directory using tempfile.TemporaryDirectory . This
directory is deleted when the model python object is garbage-collected.
|
|
verbose
|
Verbosity mode. 0 = silent, 1 = small details, 2 = full details. | |
hyperparameter_template
|
Override the default values of the hyper-parameters.
If None (default), the default parameters of the library are used. If set,
hyperparameter_template refers to one of the preconfigured
hyper-parameter sets. These sets outperform the default
hyper-parameters (either generally or in specific scenarios).
You can omit the version (e.g. remove "@v5") to use the latest version of
the template. In this case, the hyper-parameters can change between
releases (not recommended for training in production).
|
|
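For example, assuming the "benchmark_rank1" template is available in the installed version of the library:
# Train with a preconfigured hyper-parameter set, pinned to version 1 of the
# template for reproducibility.
model = tfdf.keras.GradientBoostedTreesModel(
    hyperparameter_template="benchmark_rank1@v1")
model.fit(tf_dataset)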
advanced_arguments
|
Advanced control of the model that most users won't need
to use. See AdvancedArguments for details.
|
|
num_threads
|
Number of threads used to train the model. Different learning
algorithms use multi-threading differently and with different degrees of
efficiency. If None , num_threads will be automatically set to the
number of processors (up to a maximum of 32; or set to 6 if the number of
processors is not available).
Making num_threads significantly larger than the number of processors
can slow down the training. The default value logic might change in
the future.
|
|
name
|
The name of the model. | |
max_vocab_count
|
Default maximum size of the vocabulary for CATEGORICAL and
CATEGORICAL_SET features stored as strings. If more unique values exist,
only the most frequent values are kept, and the remaining values are
considered as out-of-vocabulary. The value max_vocab_count defined in a
FeatureUsage (if any) takes precedence.
|
|
try_resume_training
|
If true, the model training resumes from the checkpoint
stored in the temp_directory directory. If temp_directory does not
contain any model checkpoint, the training starts from the beginning.
Resuming training is useful in the following situations: (1) The training
was interrupted by the user (e.g. ctrl+c or "stop" button in a
notebook), (2) the training job was interrupted (e.g. rescheduling), and
(3) the hyper-parameters of the model were changed such that an initially
completed training is now incomplete (e.g. increasing the number of
trees).
Note: Training can only be resumed if the training dataset is exactly the
same (i.e. no reshuffle in the tf.data.Dataset).
|
|
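A sketch of resumable training; the working directory path is illustrative:
# Store checkpoints in a persistent directory so an interrupted training can
# be resumed by re-running fit() with the same temp_directory.
model = tfdf.keras.GradientBoostedTreesModel(
    temp_directory="/path/to/work_dir",
    try_resume_training=True,
    num_trees=2000)
model.fit(tf_dataset)  # If interrupted, re-running this call resumes training.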
check_dataset
|
If set to true, tests whether the dataset is well configured for
training: (1) checks that the dataset does not contain any repeat
operation, (2) checks that the dataset contains a batch operation,
(3) checks that the dataset has a large enough batch size (min 100 if the
dataset contains more than 1k examples or if the number of examples is
not available). If set to false, no test is run.
|
|
tuner
|
If set, automatically optimize the hyperparameters of the model using this tuner. If the model is trained with distribution (i.e. the model definition is wrapped in a TF distribution strategy), the tuning is distributed. | |
discretize_numerical_features
|
If true, discretize all the numerical
features before training. Discretized numerical features are faster to
train with, but they can have a negative impact on the model quality.
Using discretize_numerical_features=True is equivalent to setting the
feature semantic DISCRETIZED_NUMERICAL in the features argument. See the
definition of DISCRETIZED_NUMERICAL for more details.
|
|
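For example, to trade a bit of model quality for faster training:
# Discretize all numerical features into at most 255 bins before training.
model = tfdf.keras.GradientBoostedTreesModel(
    discretize_numerical_features=True,
    num_discretized_numerical_bins=255)
model.fit(tf_dataset)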
num_discretized_numerical_bins
|
Number of bins used when discretizing
numerical features. The value num_discretized_numerical_bins defined in
a FeatureUsage (if any) takes precedence.
|
|
multitask
|
If set, train a multi-task model, that is a model with multiple
outputs trained to predict different labels. If set, the tf.dataset label
(i.e. the second element of the dataset) should be a dictionary of
label_key:label_values. Only one of multitask and task can be set.
|
|
adapt_subsample_for_maximum_training_duration
|
Controls how the maximum training duration (if set) is applied. If false, the training stops when the time budget is exhausted. If true, the size of the sampled dataset used to train individual trees is adapted dynamically so that all the trees are trained in time. Default: False. | |
allow_na_conditions
|
If true, the tree training evaluates conditions of the
type X is NA i.e. X is missing . Default: False.
|
|
apply_link_function
|
If true, applies the link function (a.k.a. activation function), if any, before returning the model prediction. If false, returns the pre-link function model output. For example, in the case of binary classification, the pre-link function output is a logit while the post-link function output is a probability. Default: True. | |
categorical_algorithm
|
How to learn splits on categorical attributes.
CART : CART algorithm. Finds categorical splits of the form "value \in
mask". The solution is exact for binary classification, regression and
ranking. It is approximated for multi-class classification. This is a
good first algorithm to use. In case of overfitting (very small
dataset, large dictionary), the "random" algorithm is a good alternative.
ONE_HOT : One-hot encoding. Finds the optimal categorical split of the
form "attribute == param". This method is similar to (but more efficient
than) converting each possible categorical value into a boolean feature.
This method is available for comparison purposes and generally performs
worse than the other alternatives.
RANDOM : Best split among a set of random candidates. Finds a
categorical split of the form "value \in mask" using a random search.
This solution can be seen as an approximation of the CART algorithm.
This method is a strong alternative to CART. This algorithm is inspired
by section "5.1 Categorical Variables" of "Random Forest", 2001.
Default: "CART".
|
|
categorical_set_split_greedy_sampling
|
For categorical set splits e.g. texts. Probability for a categorical value to be a candidate for the positive set. The sampling is applied once per node (i.e. not at every step of the greedy optimization). Default: 0.1. | |
categorical_set_split_max_num_items
|
For categorical set splits e.g. texts.
Maximum number of items (prior to the sampling). If more items are
available, the least frequent items are ignored. Changing this value is
similar to changing the "max_vocab_count" before loading the dataset, with
the following exception: With max_vocab_count , all the remaining items
are grouped in a special Out-of-vocabulary item. With max_num_items ,
this is not the case. Default: -1.
|
|
categorical_set_split_min_item_frequency
|
For categorical set splits e.g. texts. Minimum number of occurrences of an item to be considered. Default: 1. | |
compute_permutation_variable_importance
|
If true, compute the permutation variable importance of the model at the end of the training using the validation dataset. Enabling this feature can increase the training time significantly. Default: False. | |
cross_entropy_ndcg_truncation
|
Truncation of the cross-entropy NDCG loss
(default 5). Only used with cross-entropy NDCG loss i.e.
loss="XE_NDCG_MART" Default: 5.
|
|
dart_dropout
|
Dropout rate applied when using DART, i.e. when forest_extraction=DART. Default: None. | |
early_stopping
|
Early stopping detects the overfitting of the model and
halts its training using the validation dataset. If not provided directly,
the validation dataset is extracted from the training dataset (see the
"validation_ratio" parameter):
NONE : No early stopping. All the num_trees are trained and kept.
MIN_LOSS_FINAL : All the num_trees are trained. The model is then
truncated to minimize the validation loss, i.e. some of the trees are
discarded so as to minimize the validation loss.
LOSS_INCREASE : Classical early stopping. Stops the training when the
validation loss does not decrease for early_stopping_num_trees_look_ahead
trees. Default: "LOSS_INCREASE".
|
|
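A sketch combining the early stopping parameters with a validation fraction:
# Train up to 500 trees, keep 10% of the training data for validation, and
# stop when the validation loss has not improved for 50 consecutive trees.
model = tfdf.keras.GradientBoostedTreesModel(
    num_trees=500,
    early_stopping="LOSS_INCREASE",
    early_stopping_num_trees_look_ahead=50,
    validation_ratio=0.1)
model.fit(tf_dataset)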
early_stopping_initial_iteration
|
0-based index of the first iteration considered for early stopping computation. Increasing this value prevents too early stopping due to noisy initial iterations of the learner. Default: 10. | |
early_stopping_num_trees_look_ahead
|
Rolling number of trees used to detect validation loss increase and trigger early stopping. Default: 30. | |
focal_loss_alpha
|
EXPERIMENTAL, default 0.5. Weighting parameter for focal
loss, positive samples weighted by alpha, negative samples by (1-alpha).
The default 0.5 value means no active class-level weighting. Only used
with focal loss i.e. loss="BINARY_FOCAL_LOSS" Default: 0.5.
|
|
focal_loss_gamma
|
EXPERIMENTAL, default 2.0. Exponent of the misprediction
exponent term in focal loss, corresponds to gamma parameter in
https://arxiv.org/pdf/1708.02002.pdf. Only used with focal loss i.e.
loss="BINARY_FOCAL_LOSS" Default: 2.0.
|
|
forest_extraction
|
How to construct the forest:
MART (Multiple Additive Regression Trees): the "classical" gradient boosting approach, in which each new tree corrects the errors of all the previously trained trees.
DART (Dropout Additive Regression Trees): a variant of MART in which each new tree only corrects the errors of a random subset of the previous trees (see dart_dropout).
Default: "MART".
|
|
goss_alpha
|
Alpha parameter for the GOSS (Gradient-based One-Side Sampling; see "LightGBM: A Highly Efficient Gradient Boosting Decision Tree") sampling method. Default: 0.2. | |
goss_beta
|
Beta parameter for the GOSS (Gradient-based One-Side Sampling) sampling method. Default: 0.1. | |
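A sketch enabling GOSS sampling, assuming "GOSS" is the corresponding sampling_method value in the installed version:
# Gradient-based One-Side Sampling: keep the examples with the largest
# gradients (goss_alpha fraction) plus a random goss_beta fraction of the rest.
model = tfdf.keras.GradientBoostedTreesModel(
    sampling_method="GOSS",
    goss_alpha=0.2,
    goss_beta=0.1)
model.fit(tf_dataset)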
growing_strategy
|
How to grow the tree.
LOCAL : Each node is split independently of the other nodes. In other
words, as long as a node satisfies the split constraints (e.g. maximum
depth, minimum number of observations), the node will be split. This is
the "classical" way to grow decision trees.
BEST_FIRST_GLOBAL : The node with the best loss reduction among all
the nodes of the tree is selected for splitting. This method is also
called "best first" or "leaf-wise growth". See "Best-first decision
tree learning", Shi, and "Additive logistic regression: A statistical
view of boosting", Friedman, for more details. Default: "LOCAL".
|
|
honest
|
In honest trees, different training examples are used to infer the structure and the leaf values. This regularization technique trades examples for bias estimates. It might increase or reduce the quality of the model. See "Generalized Random Forests", Athey et al. In this paper, Honest trees are trained with the Random Forest algorithm with a sampling without replacement. Default: False. | |
honest_fixed_separation
|
For honest trees only i.e. honest=true. If true, a new random separation is generated for each tree. If false, the same separation is used for all the trees (e.g., in Gradient Boosted Trees containing multiple trees). Default: False. | |
honest_ratio_leaf_examples
|
For honest trees only i.e. honest=true. Ratio of examples used to set the leaf values. Default: 0.5. | |
in_split_min_examples_check
|
Whether to check the min_examples constraint
in the split search (i.e. splits leading to one child having less than
min_examples examples are considered invalid) or before the split
search (i.e. a node can be derived only if it contains more than
min_examples examples). If false, there can be nodes with less than
min_examples training examples. Default: True.
|
|
keep_non_leaf_label_distribution
|
Whether to keep the node value (i.e. the distribution of the labels of the training examples) of non-leaf nodes. This information is not used during serving, however it can be used for model interpretation as well as hyper parameter tuning. This can take lots of space, sometimes accounting for half of the model size. Default: True. | |
l1_regularization
|
L1 regularization applied to the training loss. Impacts the tree structure and leaf values. Default: 0.0. | |
l2_categorical_regularization
|
L2 regularization applied to the training loss for categorical features. Impacts the tree structure and leaf values. Default: 1.0. | |
l2_regularization
|
L2 regularization applied to the training loss for all features except the categorical ones. Default: 0.0. | |
lambda_loss
|
Lambda regularization applied to certain training loss functions. Only for NDCG loss. Default: 1.0. | |
loss
|
The loss optimized by the model. If not specified (DEFAULT), the loss
is selected automatically according to the "task" and label
statistics. For example, if task=CLASSIFICATION and the label has two
possible values, the loss will be set to BINOMIAL_LOG_LIKELIHOOD.
Possible values are:
DEFAULT : Select the loss automatically according to the task and label statistics.
BINOMIAL_LOG_LIKELIHOOD : Binomial log likelihood. Only valid for binary classification.
SQUARED_ERROR : Least square loss. Only valid for regression.
POISSON : Poisson log likelihood loss. Mainly used for counting problems. Only valid for regression.
MULTINOMIAL_LOG_LIKELIHOOD : Multinomial log likelihood, i.e. cross-entropy. Only valid for binary or multi-class classification.
LAMBDA_MART_NDCG : LambdaMART with NDCG@5.
XE_NDCG_MART : Cross Entropy Loss NDCG. See arxiv.org/abs/1911.09798.
BINARY_FOCAL_LOSS : Focal loss. Only valid for binary classification. See https://arxiv.org/pdf/1708.02002.pdf.
MEAN_AVERAGE_ERROR : Mean average error, a.k.a. MAE.
LAMBDA_MART_NDCG5 : DEPRECATED, use LAMBDA_MART_NDCG. LambdaMART with NDCG@5.
Default: "DEFAULT".
|
|
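For example, forcing a focal loss for an imbalanced binary classification problem:
# Override the automatic loss selection and up-weight the positive class.
model = tfdf.keras.GradientBoostedTreesModel(
    loss="BINARY_FOCAL_LOSS",
    focal_loss_alpha=0.75,
    focal_loss_gamma=2.0)
model.fit(tf_dataset)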
max_depth
|
Maximum depth of the tree. max_depth=1 means that all trees
will be roots. max_depth=-1 means that tree depth is not restricted by
this parameter. Values <= -2 will be ignored. Default: 6.
|
|
max_num_nodes
|
Maximum number of nodes in the tree. Set to -1 to disable
this limit. Only available for growing_strategy=BEST_FIRST_GLOBAL .
Default: None.
|
|
maximum_model_size_in_memory_in_bytes
|
Limit the size of the model when stored in RAM. Different algorithms can enforce this limit differently. Note that when models are compiled into an inference engine, the size of the inference engine is generally much smaller than the original model. Default: -1.0. | |
maximum_training_duration_seconds
|
Maximum training duration of the model expressed in seconds. Each learning algorithm is free to use this parameter as it sees fit. Enabling maximum training duration makes the model training non-deterministic. Default: -1.0. | |
mhld_oblique_max_num_attributes
|
For MHLD oblique splits i.e.
split_axis=MHLD_OBLIQUE . Maximum number of attributes in the
projection. Increasing this value increases the training time. Decreasing
this value acts as a regularization. The value should be in [2,
num_numerical_features]. If the value is above the total number of
numerical features, the value is capped automatically. The value 1 is
allowed but results in ordinary (non-oblique) splits. Default: None.
|
|
mhld_oblique_sample_attributes
|
For MHLD oblique splits i.e.
split_axis=MHLD_OBLIQUE . If true, applies the attribute sampling
controlled by the "num_candidate_attributes" or
"num_candidate_attributes_ratio" parameters. If false, all the attributes
are tested. Default: None.
|
|
min_examples
|
Minimum number of examples in a node. Default: 5. | |
missing_value_policy
|
Method used to handle missing attribute values.
GLOBAL_IMPUTATION : Missing attribute values are imputed with the
mean (in case of numerical attribute) or the most-frequent-item (in
case of categorical attribute) computed on the entire dataset (i.e. the
information contained in the data spec).
LOCAL_IMPUTATION : Missing attribute values are imputed with the mean
(numerical attribute) or most-frequent-item (in the case of categorical
attribute) evaluated on the training examples in the current node.
RANDOM_LOCAL_IMPUTATION : Missing attribute values are imputed from
randomly sampled values from the training examples in the current node.
This method was proposed by Clinic et al. in "Random Survival Forests"
(https://projecteuclid.org/download/pdfview_1/euclid.aoas/1223908043).
Default: "GLOBAL_IMPUTATION".
|
|
ndcg_truncation
|
Truncation of the NDCG loss (default 5). Only used with
NDCG loss i.e. loss="LAMBDA_MART_NDCG". Default: 5.
|
|
num_candidate_attributes
|
Number of unique valid attributes tested for each
node. An attribute is valid if it has at least one valid split. If
num_candidate_attributes=0 , the value is set to the classical default
value for Random Forest: sqrt(number of input attributes) in case of
classification and number_of_input_attributes / 3 in case of
regression. If num_candidate_attributes=-1 , all the attributes are
tested. Default: -1.
|
|
num_candidate_attributes_ratio
|
Ratio of attributes tested at each node. If
set, it is equivalent to num_candidate_attributes =
number_of_input_features x num_candidate_attributes_ratio . The possible
values are in ]0, 1], as well as -1. If not set or equal to -1,
the num_candidate_attributes is used. Default: -1.0.
|
|
num_trees
|
Maximum number of decision trees. The effective number of trained trees can be smaller if early stopping is enabled. Default: 300. | |
pure_serving_model
|
Clear the model from any information that is not required for model serving. This includes debugging, model interpretation and other metadata. The size of the serialized model can be reduced significantly (a 50% model size reduction is common). This parameter has no impact on the quality, serving speed or RAM usage of model serving. Default: False. | |
random_seed
|
Random seed for the training of the model. Learners are expected to be deterministic given the random seed. Default: 123456. | |
sampling_method
|
Control the sampling of the datasets used to train
individual trees. Possible values include NONE (no sampling), RANDOM (the
default; uniform random sampling, see subsample), GOSS (Gradient-based
One-Side Sampling, see goss_alpha and goss_beta) and selective gradient
boosting (see selective_gradient_boosting_ratio). Default: "RANDOM".
|
|
selective_gradient_boosting_ratio
|
Ratio of the dataset used to train individual trees for the selective Gradient Boosting ("Selective Gradient Boosting for Effective Learning to Rank"; Lucchese et al; http://quickrank.isti.cnr.it/selective-data/selective-SIGIR2018.pdf) sampling method. Default: 0.01. | |
shrinkage
|
Coefficient applied to each tree prediction. A small value (0.02) tends to give more accurate results (assuming enough trees are trained), but results in larger models. Analogous to neural network learning rate. Fixed to 1.0 for DART models. Default: 0.1. | |
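A sketch of the usual shrinkage / number-of-trees trade-off:
# A smaller learning rate generally needs more trees to reach the same quality,
# and produces a larger model.
model = tfdf.keras.GradientBoostedTreesModel(
    shrinkage=0.02,
    num_trees=1000)
model.fit(tf_dataset)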
sorting_strategy
|
How the numerical features are sorted in order to find the splits.
IN_NODE: The features are sorted just before being used in the node. This solution is slow but consumes little memory.
PRESORT: The features are pre-sorted at the start of the training. This solution is faster but consumes much more memory than IN_NODE.
Default: "PRESORT".
|
|
sparse_oblique_max_num_projections
|
For sparse oblique splits i.e.
split_axis=SPARSE_OBLIQUE . Maximum number of projections (applied after
the num_projections_exponent).
Oblique splits try out max(p^num_projections_exponent,
max_num_projections) random projections for choosing a split, where p is
the number of numerical features. Increasing "max_num_projections"
increases the training time but not the inference time. In late stage
model development, if every bit of accuracy is important, increase this
value.
The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020)
does not define this hyperparameter. Default: None.
|
|
sparse_oblique_normalization
|
For sparse oblique splits i.e.
split_axis=SPARSE_OBLIQUE . Normalization applied on the features,
before applying the sparse oblique projections.
NONE : No normalization.
STANDARD_DEVIATION : Normalize the feature by the estimated standard
deviation on the entire train dataset. Also known as Z-Score normalization.
MIN_MAX : Normalize the feature by the range (i.e. max-min) estimated
on the entire train dataset. Default: None.
|
|
sparse_oblique_num_projections_exponent
|
For sparse oblique splits i.e.
split_axis=SPARSE_OBLIQUE . Controls the number of random projections
to test at each node.
Increasing this value very likely improves the quality of the model,
drastically increases the training time, and does not impact the inference
time.
Oblique splits try out max(p^num_projections_exponent,
max_num_projections) random projections for choosing a split, where p is
the number of numerical features. Therefore, increasing this
num_projections_exponent and possibly max_num_projections may improve
model quality, but will also significantly increase training time.
Note that the complexity of (classic) Random Forests is roughly
proportional to num_projections_exponent=0.5 , since it considers
sqrt(num_features) for a split. The complexity of (classic) GBDT is
roughly proportional to num_projections_exponent=1 , since it considers
all features for a split.
The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020)
recommends values in [1/4, 2]. Default: None.
|
|
sparse_oblique_projection_density_factor
|
Density of the projections as an
exponent of the number of features. Independently for each projection,
each feature has a probability "projection_density_factor / num_features"
to be considered in the projection.
The paper "Sparse Projection Oblique Random Forests" (Tomita et al, 2020)
calls this parameter lambda and recommends values in [1, 5].
Increasing this value increases training and inference time (on average).
This value is best tuned for each dataset. Default: None.
|
|
sparse_oblique_weights
|
For sparse oblique splits i.e.
split_axis=SPARSE_OBLIQUE . Possible values:
BINARY : The oblique weights are sampled in {-1,1} (default).
CONTINUOUS : The oblique weights are sampled in [-1,1]. Default: None.
|
|
split_axis
|
What structure of split to consider for numerical features.
AXIS_ALIGNED : Axis-aligned splits (i.e. one condition at a time).
This is the "classical" way to train a tree. Default value.
SPARSE_OBLIQUE : Sparse oblique splits (i.e. random splits on a small
number of features) from "Sparse Projection Oblique Random Forests",
Tomita et al., 2020.
MHLD_OBLIQUE : Multi-class Hellinger Linear Discriminant splits from
"Classification Based on Multivariate Contrast Patterns",
Canete-Sifuentes et al., 2019. Default: "AXIS_ALIGNED".
|
|
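A sketch enabling sparse oblique splits with the parameters described above:
# Random sparse linear combinations of the numerical features are tested as
# split conditions, with z-score normalization of the features.
model = tfdf.keras.GradientBoostedTreesModel(
    split_axis="SPARSE_OBLIQUE",
    sparse_oblique_normalization="STANDARD_DEVIATION",
    sparse_oblique_num_projections_exponent=1.0)
model.fit(tf_dataset)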
subsample
|
Ratio of the dataset (sampling without replacement) used to train individual trees for the random sampling method. If "subsample" is set and if "sampling_method" is NOT set or set to "NONE", then "sampling_method" is implicitly set to "RANDOM". In other words, to enable random subsampling, you only need to set "subsample". Default: 1.0. | |
uplift_min_examples_in_treatment
|
For uplift models only. Minimum number of examples per treatment in a node. Default: 5. | |
uplift_split_score
|
For uplift models only. Splitter score, i.e. the score
optimized by the splitters. The scores are introduced in "Decision trees
for uplift modeling with single and multiple treatments", Rzepakowski et
al. Notation: p is the probability / average value of the positive outcome,
q is the probability / average value in the control group.
KULLBACK_LEIBLER or KL : - p log (p/q)
EUCLIDEAN_DISTANCE or ED : (p-q)^2
CHI_SQUARED or CS : (p-q)^2/q
Default: "KULLBACK_LEIBLER".
|
|
use_hessian_gain
|
If true, uses a formulation of split gain with a hessian term, i.e. optimizes the splits to minimize the variance of "gradient / hessian". Available for all losses except regression. Default: False. | |
validation_interval_in_trees
|
Evaluate the model on the validation set every "validation_interval_in_trees" trees. Increasing this value reduces the cost of validation and can impact the early stopping policy (as early stopping is only tested during the validation). Default: 1. | |
validation_ratio
|
Fraction of the training dataset used for validation if no validation dataset is provided. The validation dataset, whether provided directly or extracted from the training dataset, is used to compute the validation loss, other validation metrics, and possibly trigger early stopping (if enabled). When early stopping is disabled, the validation dataset is only used for monitoring and does not influence the model directly. If "validation_ratio" is set to 0, early stopping is disabled (i.e., it implies setting early_stopping=NONE). Default: 0.1. | |
activity_regularizer
|
Optional regularizer function for the output of this layer. | |
autotune_steps_per_execution
|
Settable property to enable tuning for steps_per_execution | |
compute_dtype
|
The dtype of the layer's computations.
This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed
precision is used, this is the same as Layer.dtype, the dtype of the
weights. Layers automatically cast their inputs to the compute dtype, which
causes computations and the output to be in the compute dtype as well.
This is done by the base Layer class in Layer.__call__, so you do not have
to insert these casts if implementing your own layer. Layers often perform
certain internal computations in higher precision when compute_dtype is
float16 or bfloat16 for numeric stability. |
|
distribute_reduction_method
|
The method employed to reduce per-replica values during training.
Unless specified, the value "auto" will be assumed, indicating that
the reduction strategy should be chosen based on the current
running environment.
|
|
distribute_strategy
|
The tf.distribute.Strategy this model was created under.
|
|
dtype
|
The dtype of the layer weights.
This is equivalent to Layer.dtype_policy.variable_dtype. |
|
dtype_policy
|
The dtype policy associated with this layer.
This is an instance of a tf.keras.mixed_precision.Policy. |
|
dynamic
|
Whether the layer is dynamic (eager-only); set in the constructor. | |
input
|
Retrieves the input tensor(s) of a layer.
Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. |
|
input_spec
|
InputSpec instance(s) describing the input format for this layer.
When you create a layer subclass, you can set self.input_spec to enable
the layer to run input compatibility checks when it is called. Input checks
that can be specified via input_spec include structure (e.g. a single
input, a list of 2 inputs, etc.), shape, rank (ndim) and dtype.
For more information, see tf.keras.layers.InputSpec. |
|
jit_compile
|
Specify whether to compile the model with XLA.
XLA is an optimizing compiler
for machine learning. For more information on supported operations please refer to the XLA documentation. Also refer to known XLA issues for more details. |
|
layers
|
||
learner
|
Name of the learning algorithm used to train the model. | |
learner_params
|
Gets the dictionary of hyper-parameters passed in the model constructor.
Changing this dictionary will impact the training. |
|
losses
|
List of losses added using the add_loss() API.
Variable regularization tensors are created when this property is
accessed, so it is eager safe: accessing losses under a tf.GradientTape
will propagate gradients back to the corresponding variables.
|
|
metrics
|
Return metrics added using compile() or add_metric() .
|
|
metrics_names
|
Returns the model's display labels for all outputs.
|
|
name_scope
|
Returns a tf.name_scope instance for this class.
|
|
non_trainable_weights
|
List of all non-trainable weights tracked by this layer.
Non-trainable weights are not updated during training. They are
expected to be updated manually in call(). |
|
num_training_examples
|
Number of training examples. | |
num_validation_examples
|
Number of validation examples. | |
output
|
Retrieves the output tensor(s) of a layer.
Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. |
|
run_eagerly
|
Settable attribute indicating whether the model should run eagerly.
Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. |
|
steps_per_execution
|
Settable steps_per_execution variable. Requires a compiled model.
|
|
submodules
|
Sequence of all sub-modules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
|
supports_masking
|
Whether this layer supports computing a mask using compute_mask .
|
|
trainable
|
||
trainable_weights
|
List of all trainable weights tracked by this layer.
Trainable weights are updated via gradient descent during training. |
|
training_model_id
|
Identifier of the model. | |
variable_dtype
|
Alias of Layer.dtype , the dtype of the weights.
|
|
weights
|
Returns the list of all layer variables/weights. |
Methods
add_loss
add_loss(
losses, **kwargs
)
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be
dependent on the inputs passed when calling a layer. Hence, when reusing
the same layer on different inputs a
and b
, some entries in
layer.losses
may be dependent on a
and some on b
. This method
automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's call
function, in which case losses
should be a Tensor or list of Tensors.
Example:
class MyLayer(tf.keras.layers.Layer):
  def call(self, inputs):
    self.add_loss(tf.abs(tf.reduce_mean(inputs)))
    return inputs
The same code works in distributed training: the input to add_loss()
is treated like a regularization loss and averaged across replicas
by the training loop (both built-in Model.fit()
and compliant custom
training loops).
The add_loss
method can also be called directly on a Functional Model
during construction. In this case, any loss Tensors passed to this Model
must be symbolic and be able to be traced back to the model's Input
s.
These losses become part of the model's topology and are tracked in
get_config
.
Example:
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
If this is not the case for your loss (if, for example, your loss
references a Variable
of one of the model's layers), you can wrap your
loss in a zero-argument lambda. These losses are not tracked as part of
the model's topology since they can't be serialized.
Example:
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
Args | |
---|---|
losses
|
Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
**kwargs
|
Used for backwards compatibility only. |
build
build(
input_shape
)
Builds the model based on input shapes received.
This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.
This method only exists for users who want to call model.build()
in a
standalone way (as a substitute for calling the model on real data to
build it). It will never be called by the framework (and thus it will
never throw unexpected errors in an unrelated workflow).
Args | |
---|---|
input_shape
|
Single tuple, TensorShape instance, or list/dict of
shapes, where shapes are tuples, integers, or TensorShape
instances.
|
Raises | |
---|---|
ValueError
|
In each of these cases, the user should build their model by calling it on real tensor data. |
build_from_config
build_from_config(
config
)
Builds the layer's states with the supplied config dict.
By default, this method calls the build(config["input_shape"])
method,
which creates weights based on the layer's input shape in the supplied
config. If your config contains other information needed to load the
layer's state, you should override this method.
Args | |
---|---|
config
|
Dict containing the input shape associated with this layer. |
call
call(
inputs, training=False
)
Inference of the model.
This method is used for prediction and evaluation of a trained model.
Args | |
---|---|
inputs
|
Input tensors. |
training
|
Is the model being trained. Always False. |
Returns | |
---|---|
Model predictions. |
call_get_leaves
call_get_leaves(
inputs
)
Computes the index of the active leaf in each tree.
The active leaf is the leaf that receives the example during inference.
The returned value "leaves[i,j]" is the index of the active leaf for the i-th example and the j-th tree. Leaves are indexed by depth-first exploration with the negative child visited before the positive one (similar to the "iterate_on_nodes()" iteration). Leaf indices are also available with LeafNode.leaf_idx.
Args | |
---|---|
inputs
|
Input tensors. Same signature as the model's "call(inputs)". |
Returns | |
---|---|
Index of the active leaf for each tree in the model. |
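A sketch of retrieving the active leaves for one batch of the training dataset (reusing the tf_dataset from the usage example above):
# "features" has the same structure as the model inputs. "leaves[i, j]" is the
# index of the active leaf of the j-th tree for the i-th example of the batch.
for features, _ in tf_dataset.take(1):
  leaves = model.call_get_leaves(features)
  print(leaves.shape)  # (batch_size, num_trees)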
capabilities
@staticmethod
capabilities() -> abstract_learner_pb2.LearnerCapabilities
Lists the capabilities of the learning algorithm.
collect_data_step
collect_data_step(
data, is_training_example
)
Collect examples e.g. training or validation.
compile
compile(
metrics=None, weighted_metrics=None, **kwargs
)
Configure the model for training.
Unlike most Keras models, calling "compile" is optional before calling "fit".
Args | |
---|---|
metrics
|
List of metrics to be evaluated by the model during training and testing. |
weighted_metrics
|
List of metrics to be evaluated and weighted by
sample_weight or class_weight during training and testing.
|
**kwargs
|
Other arguments passed to compile. |
Raises | |
---|---|
ValueError
|
Invalid arguments. |
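A minimal sketch; the accuracy metric is only an example:
model = tfdf.keras.GradientBoostedTreesModel()
model.compile(metrics=["accuracy"])  # Optional, but enables metric reporting.
model.fit(tf_dataset)
evaluation = model.evaluate(tf_dataset, return_dict=True)
print(evaluation)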
compile_from_config
compile_from_config(
config
)
Compiles the model with the information given in config.
This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.
Args | |
---|---|
config
|
Dict containing information for compiling the model. |
compute_loss
compute_loss(
x=None, y=None, y_pred=None, sample_weight=None
)
Compute the total loss, validate it, and return it.
Subclasses can optionally override this method to provide custom loss computation logic.
Example:
class MyModel(tf.keras.Model):
  def __init__(self, *args, **kwargs):
    super(MyModel, self).__init__(*args, **kwargs)
    self.loss_tracker = tf.keras.metrics.Mean(name='loss')

  def compute_loss(self, x, y, y_pred, sample_weight):
    loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
    loss += tf.add_n(self.losses)
    self.loss_tracker.update_state(loss)
    return loss

  def reset_metrics(self):
    self.loss_tracker.reset_states()

  @property
  def metrics(self):
    return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
Args | |
---|---|
x
|
Input data. |
y
|
Target data. |
y_pred
|
Predictions returned by the model (output of model(x) )
|
sample_weight
|
Sample weights for weighting the loss function. |
Returns | |
---|---|
The total loss as a tf.Tensor , or None if no loss results (which
is the case when called by Model.test_step ).
|
compute_mask
compute_mask(
inputs, mask=None
)
Computes an output mask tensor.
Args | |
---|---|
inputs
|
Tensor or list of tensors. |
mask
|
Tensor or list of tensors. |
Returns | |
---|---|
None or a tensor (or list of tensors, one per output tensor of the layer). |
compute_metrics
compute_metrics(
x, y, y_pred, sample_weight
)
Update metric states and collect all metrics to be returned.
Subclasses can optionally override this method to provide custom metric updating and collection logic.
Example:
class MyModel(tf.keras.Sequential):
  def compute_metrics(self, x, y, y_pred, sample_weight):
    # This super call updates `self.compiled_metrics` and returns
    # results for all metrics listed in `self.metrics`.
    metric_results = super(MyModel, self).compute_metrics(
        x, y, y_pred, sample_weight)
    # Note that `self.custom_metric` is not listed in `self.metrics`.
    self.custom_metric.update_state(x, y, y_pred, sample_weight)
    metric_results['custom_metric_name'] = self.custom_metric.result()
    return metric_results
Args | |
---|---|
x
|
Input data. |
y
|
Target data. |
y_pred
|
Predictions returned by the model (output of model.call(x) )
|
sample_weight
|
Sample weights for weighting the loss function. |
Returns | |
---|---|
A dict containing values that will be passed to
tf.keras.callbacks.CallbackList.on_train_batch_end() . Typically, the
values of the metrics listed in self.metrics are returned. Example:
{'loss': 0.2, 'accuracy': 0.7} .
|
compute_output_shape
compute_output_shape(
input_shape
)
Computes the output shape of the layer.
This method will cause the layer's state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.
Args | |
---|---|
input_shape
|
Shape tuple (tuple of integers) or tf.TensorShape ,
or structure of shape tuples / tf.TensorShape instances
(one per output tensor of the layer).
Shape tuples can include None for free dimensions,
instead of an integer.
|
Returns | |
---|---|
A tf.TensorShape instance
or structure of tf.TensorShape instances.
|
count_params
count_params()
Count the total number of scalars composing the weights.
Returns | |
---|---|
An integer count. |
Raises | |
---|---|
ValueError
|
if the layer isn't yet built (in which case its weights aren't yet defined). |
evaluate
evaluate(
x=None,
y=None,
batch_size=None,
verbose='auto',
sample_weight=None,
steps=None,
callbacks=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
return_dict=False,
**kwargs
)
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches (see the batch_size
arg.)
Args | |
---|---|
x
|
Input data. It could be:
|
y
|
Target data. Like the input data x , it could be either Numpy
array(s) or TensorFlow tensor(s). It should be consistent with x
(you cannot have Numpy inputs and tensor targets, or inversely).
If x is a dataset, generator or keras.utils.Sequence instance,
y should not be specified (since targets will be obtained from
the iterator/dataset).
|
batch_size
|
Integer or None . Number of samples per batch of
computation. If unspecified, batch_size will default to 32. Do
not specify the batch_size if your data is in the form of a
dataset, generators, or keras.utils.Sequence instances (since
they generate batches).
|
verbose
|
"auto" , 0, 1, or 2. Verbosity mode.
0 = silent, 1 = progress bar, 2 = single line.
"auto" becomes 1 for most cases, and to 2 when used with
ParameterServerStrategy . Note that the progress bar is not
particularly useful when logged to a file, so verbose=2 is
recommended when not running interactively (e.g. in a production
environment). Defaults to 'auto'.
|
sample_weight
|
Optional Numpy array of weights for the test samples,
used for weighting the loss function. You can either pass a flat
(1D) Numpy array with the same length as the input samples
(1:1 mapping between weights and samples), or in the case of
temporal data, you can pass a 2D array with shape (samples,
sequence_length) , to apply a different weight to every
timestep of every sample. This argument is not supported when
x is a dataset, instead pass sample weights as the third
element of x .
|
steps
|
Integer or None . Total number of steps (batches of samples)
before declaring the evaluation round finished. Ignored with the
default value of None . If x is a tf.data dataset and steps
is None, 'evaluate' will run until the dataset is exhausted. This
argument is not supported with array inputs.
|
callbacks
|
List of keras.callbacks.Callback instances. List of
callbacks to apply during evaluation. See
callbacks.
|
max_queue_size
|
Integer. Used for generator or
keras.utils.Sequence input only. Maximum size for the generator
queue. If unspecified, max_queue_size will default to 10.
|
workers
|
Integer. Used for generator or keras.utils.Sequence input
only. Maximum number of processes to spin up when using
process-based threading. If unspecified, workers will default to
1.
|
use_multiprocessing
|
Boolean. Used for generator or
keras.utils.Sequence input only. If True , use process-based
threading. If unspecified, use_multiprocessing will default to
False . Note that because this implementation relies on
multiprocessing, you should not pass non-pickleable arguments to
the generator as they can't be passed easily to children
processes.
|
return_dict
|
If True , loss and metric results are returned as a
dict, with each key being the name of the metric. If False , they
are returned as a list.
|
**kwargs
|
Unused at this time. |
See the discussion of Unpacking behavior for iterator-like inputs
for
Model.fit
.
Returns | |
---|---|
Scalar test loss (if the model has a single output and no metrics)
or list of scalars (if the model has multiple outputs
and/or metrics). The attribute model.metrics_names will give you
the display labels for the scalar outputs.
|
Raises | |
---|---|
RuntimeError
|
If model.evaluate is wrapped in a tf.function .
|
export
export(
filepath
)
Create a SavedModel artifact for inference (e.g. via TF-Serving).
This method lets you export a model to a lightweight SavedModel artifact
that contains the model's forward pass only (its call()
method)
and can be served via e.g. TF-Serving. The forward pass is registered
under the name serve()
(see example below).
The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact -- it is entirely standalone.
Args | |
---|---|
filepath
|
str or pathlib.Path object. Path where to save
the artifact.
|
Example:
# Create the artifact
model.export("path/to/location")
# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
If you would like to customize your serving endpoints, you can
use the lower-level keras.export.ExportArchive
class. The export()
method relies on ExportArchive
internally.
fit
fit(
x=None,
y=None,
callbacks=None,
verbose: Optional[Any] = None,
validation_steps: Optional[int] = None,
validation_data: Optional[Any] = None,
sample_weight: Optional[Any] = None,
steps_per_epoch: Optional[Any] = None,
class_weight: Optional[Any] = None,
**kwargs
) -> tf_keras.callbacks.History
Trains the model.
Local training
It is recommended to use a Pandas Dataframe dataset and to convert it to
a TensorFlow dataset with pd_dataframe_to_tf_dataset()
:
pd_dataset = pandas.DataFrame(...)
tf_dataset = pd_dataframe_to_tf_dataset(pd_dataset, label="my_label")
model.fit(tf_dataset)
The following dataset formats are supported:
"x" is a
tf.data.Dataset
containing a tuple "(features, labels)". "features" can be a dictionary a tensor, a list of tensors or a dictionary of tensors (recommended). "labels" is a tensor."x" is a tensor, list of tensors or dictionary of tensors containing the input features. "y" is a tensor.
"x" is a numpy-array, list of numpy-arrays or dictionary of numpy-arrays containing the input features. "y" is a numpy-array.
- The dataset need to be read exactly once. If you use a TensorFlow dataset, make sure NOT to add a "repeat" operation.
- The algorithm does not benefit from shuffling the dataset. If you use a TensorFlow dataset, make sure NOT to add a "shuffle" operation.
- The dataset needs to be batched (i.e. with a "batch" operation). However, the number of elements per batch has not impact on the model. Generally, it is recommended to use batches as large as possible as its speeds-up reading the dataset in TensorFlow.
Input features do not need to be normalized (e.g. dividing numerical values by the variance) or indexed (e.g. replacing categorical string values by an integer). Additionally, missing values can be consumed natively.
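A sketch of training with a separate validation dataset (file paths and the label name are illustrative):
import pandas as pd
import tensorflow_decision_forests as tfdf

train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    pd.read_csv("project/train.csv"), label="my_label")
valid_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    pd.read_csv("project/valid.csv"), label="my_label")

model = tfdf.keras.GradientBoostedTreesModel()
# The validation dataset is used for early stopping instead of carving a
# validation_ratio fraction out of the training dataset.
model.fit(train_ds, validation_data=valid_ds)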
Distributed training
Some of the learning algorithms will support distributed training with the ParameterServerStrategy.
In this case, the dataset is read asynchronously in between the workers. The distribution of the training depends on the learning algorithm.
Like for non-distributed training, the dataset should be read exactly once. The simplest solution is to divide the dataset into different files (i.e. shards) and have each of the workers read a non-overlapping subset of shards.
Currently (to be changed), the validation dataset (if provided) is simply
fed to the model.evaluate()
method. Therefore, it should satisfy Keras'
evaluate API. Notably, for distributed training, the validation dataset
should be infinite (i.e. have a repeat operation).
See https://www.tensorflow.org/decision_forests/distributed_training for more details and examples.
Here is a single example of distributed training using PSS for both dataset reading and training distribution.
def dataset_fn(context, paths, training=True):
  ds_path = tf.data.Dataset.from_tensor_slices(paths)

  if context is not None:
    # Train on at least 2 workers.
    current_worker = tfdf.keras.get_worker_idx_and_num_workers(context)
    assert current_worker.num_workers > 2
    # Split the dataset's examples among the workers.
    ds_path = ds_path.shard(
        num_shards=current_worker.num_workers,
        index=current_worker.worker_idx)

  # Schema of the CSV columns and their names.
  numerical = tf.constant([math.nan], dtype=tf.float32)
  categorical_string = tf.constant([""], dtype=tf.string)
  csv_columns = [
      numerical,           # age
      categorical_string,  # workclass
      numerical,           # fnlwgt
      ...
  ]
  column_names = [
      "age", "workclass", "fnlwgt", ...
  ]
  label_name = "label"

  def read_csv_file(path):
    return tf.data.experimental.CsvDataset(path, csv_columns, header=True)

  ds_columns = ds_path.interleave(read_csv_file)

  def map_features(*columns):
    assert len(column_names) == len(columns)
    features = {column_names[i]: col for i, col in enumerate(columns)}
    label = label_table.lookup(features.pop(label_name))
    return features, label

  ds_dataset = ds_columns.map(map_features)
  if not training:
    ds_dataset = ds_dataset.repeat(None)
  ds_dataset = ds_dataset.batch(batch_size)
  return ds_dataset

strategy = tf.distribute.experimental.ParameterServerStrategy(...)
sharded_train_paths = [list of dataset files]
with strategy.scope():
  model = DistributedGradientBoostedTreesModel()
  train_dataset = strategy.distribute_datasets_from_function(
      lambda context: dataset_fn(context, sharded_train_paths))
  test_dataset = strategy.distribute_datasets_from_function(
      lambda context: dataset_fn(context, sharded_test_paths))

model.fit(train_dataset)
evaluation = model.evaluate(test_dataset, steps=num_test_examples // batch_size)
Args | |
---|---|
x
|
Training dataset (See details above for the supported formats). |
y
|
Label of the training dataset. Only used if "x" does not contain the labels. |
callbacks
|
Callbacks triggered during the training. The training runs in a single epoch, itself run in a single step. Therefore, callback logic can be called equivalently before/after the fit function. |
verbose
|
Verbosity mode. 0 = silent, 1 = small details, 2 = full details. |
validation_steps
|
Number of steps in the evaluation dataset when
evaluating the trained model with model.evaluate() . If not specified,
evaluates the model on the entire dataset (generally recommended; not
yet supported for distributed datasets).
|
validation_data
|
Validation dataset. If specified, the learner might use this dataset to help training e.g. early stopping. |
sample_weight
|
Training weights. Note: training weights can also be
provided as the third output in a tf.data.Dataset e.g. (features,
label, weights).
|
steps_per_epoch
|
[Parameter will be removed] Number of training batches to load before training the model. Currently, only supported for distributed training. |
class_weight
|
For binary classification only. Mapping class indices
(integers) to a weight (float) value. Only available for non-Distributed
training. For maximum compatibility, feed example weights through the
tf.data.Dataset or using the weight argument of
pd_dataframe_to_tf_dataset .
|
**kwargs
|
Extra arguments passed to the core keras model's fit. Note that not all keras' model fit arguments are supported. |
Returns | |
---|---|
A History object. Its History.history attribute is not yet
implemented for decision forests algorithms, and will return empty.
All other fields are filled as usual for Keras' Model.fit() .
|
fit_on_dataset_path
fit_on_dataset_path(
train_path: str,
label_key: Optional[str] = None,
weight_key: Optional[str] = None,
valid_path: Optional[str] = None,
dataset_format: Optional[str] = 'csv',
max_num_scanned_rows_to_accumulate_statistics: Optional[int] = 100000,
try_resume_training: Optional[bool] = True,
input_model_signature_fn: Optional[tf_core.InputModelSignatureFn] = tfdf.keras.build_default_input_model_signature
,
num_io_threads: int = 10
)
Trains the model on a dataset stored on disk.
This solution is generally more efficient and easier than loading the
dataset with a tf.Dataset
both for local and distributed training.
Usage example | |
---|---|
Local training
Distributed training
|
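A minimal local-training sketch; the path, label name and format are illustrative:
model = tfdf.keras.GradientBoostedTreesModel()
model.fit_on_dataset_path(
    train_path="/path/to/train.csv",  # hypothetical path
    label_key="my_label",
    dataset_format="csv")
print(model.summary())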
Args | |
---|---|
train_path
|
Path to the training dataset. Supports comma separated files, shard and glob notation. |
label_key
|
Name of the label column. |
weight_key
|
Name of the weighting column. |
valid_path
|
Path to the validation dataset. If not provided, or if the
learning algorithm does not support/need a validation dataset,
valid_path is ignored.
|
dataset_format
|
Format of the dataset. Should be one of the registered dataset format (see User Manual for more details). The format "csv" is always available but it is generally only suited for small datasets. |
max_num_scanned_rows_to_accumulate_statistics
|
Maximum number of examples to scan to determine the statistics of the features (i.e. the dataspec, e.g. mean value, dictionaries). (Currently) the "first" examples of the dataset are scanned (e.g. the first examples of the dataset if the dataset is stored in a single file). Therefore, it is important that the scanned part of the dataset is relatively uniformly sampled; notably, the scanned examples should contain all the possible categorical values (otherwise unseen values will be treated as out-of-vocabulary). If set to None, the entire dataset is scanned. This parameter has no effect if the dataset is stored in a format that already contains those values. |
try_resume_training
|
If true, tries to resume training from the model
checkpoint stored in the temp_directory directory. If temp_directory
does not contain any model checkpoint, the training starts from scratch.
This works in the following three situations: (1) the training was
interrupted by the user (e.g. ctrl+c), (2) the training job was
interrupted (e.g. rescheduling), and (3) the hyper-parameters of the
model were changed such that an initially completed training is now
incomplete (e.g. increasing the number of trees).
|
input_model_signature_fn
|
A lambda that returns the
(Dense,Sparse,Ragged)TensorSpec (or structure of TensorSpecs, e.g.
dictionary, list) corresponding to the input signature of the model. If not
specified, the input model signature is created by
build_default_input_model_signature . For example, specify
input_model_signature_fn if a numerical input feature (which is
consumed as a DenseTensorSpec(float32) by default) will be fed
differently (e.g. as a RaggedTensor(int64)).
|
num_io_threads
|
Number of threads to use for IO operations, e.g. reading a dataset from disk. Increasing this value can speed up IO operations when they are either latency- or CPU-bound. |
Returns | |
---|---|
A History object. Its History.history attribute is not yet
implemented for decision forest algorithms and will return empty.
All other fields are filled as usual for Keras.Model.fit() .
|
from_config
@classmethod
from_config( config, custom_objects=None )
Creates a layer from its config.
This method is the reverse of get_config
,
capable of instantiating the same layer from the config
dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by set_weights
).
Args | |
---|---|
config
|
A Python dictionary, typically the output of get_config. |
Returns | |
---|---|
A layer instance. |
get_build_config
get_build_config()
Returns a dictionary with the layer's input shape.
This method returns a config dict that can be used by
build_from_config(config)
to create all states (e.g. Variables and
Lookup tables) needed by the layer.
By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when TF-Keras attempts to load its value upon model loading.
Returns | |
---|---|
A dict containing the input shape associated with the layer. |
get_compile_config
get_compile_config()
Returns a serialized config with information for compiling the model.
This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.
Returns | |
---|---|
A dict containing information for compiling the model. |
get_config
get_config()
Not supported by TF-DF; returns an empty dictionary to avoid warnings.
get_layer
get_layer(
name=None, index=None
)
Retrieves a layer based on either its name (unique) or index.
If name
and index
are both provided, index
will take precedence.
Indices are based on order of horizontal graph traversal (bottom-up).
Args | |
---|---|
name
|
String, name of layer. |
index
|
Integer, index of layer. |
Returns | |
---|---|
A layer instance. |
get_metrics_result
get_metrics_result()
Returns the model's metrics values as a dict.
If any of the metric results is a dict (containing multiple metrics), each of them is added to the top-level returned dict of this method.
Returns | |
---|---|
A dict containing values of the metrics listed in self.metrics . Example: {'loss': 0.2, 'accuracy': 0.7} .
|
get_weight_paths
get_weight_paths()
Retrieve all the variables and their paths for the model.
The variable path (string) is a stable key to identify a tf.Variable
instance owned by the model. It can be used to specify variable-specific
configurations (e.g. DTensor, quantization) from a global view.
This method returns a dict with weight object paths as keys
and the corresponding tf.Variable
instances as values.
Note that if the model is a subclassed model and the weights haven't been initialized, an empty dict will be returned.
Returns | |
---|---|
A dict where keys are variable paths and values are tf.Variable
instances.
|
Example:
class SubclassModel(tf.keras.Model):
def __init__(self, name=None):
super().__init__(name=name)
self.d1 = tf.keras.layers.Dense(10)
self.d2 = tf.keras.layers.Dense(20)
def call(self, inputs):
x = self.d1(inputs)
return self.d2(x)
model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
# 'd1.kernel': model.d1.kernel,
# 'd1.bias': model.d1.bias,
# 'd2.kernel': model.d2.kernel,
# 'd2.bias': model.d2.bias,
# }
# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
# 'd1.kernel': d1.kernel,
# 'd1.bias': d1.bias,
# 'd2.kernel': d2.kernel,
# 'd2.bias': d2.bias,
# }
get_weights
get_weights()
Retrieves the weights of the model.
Returns | |
---|---|
A flat list of Numpy arrays. |
load_own_variables
load_own_variables(
store
)
Loads the state of the layer.
You can override this method to take full control of how the state of
the layer is loaded upon calling keras.models.load_model()
.
Args | |
---|---|
store
|
Dict from which the state of the model will be loaded. |
load_weights
load_weights(
*args, **kwargs
)
No-op for TensorFlow Decision Forests models.
load_weights
is not supported by TensorFlow Decision Forests models.
To save and restore a model, use the SavedModel API i.e.
model.save(...)
and tf_keras.models.load_model(...)
. To resume the
training of an existing model, create the model with
try_resume_training=True
(default value) and with a similar
temp_directory
argument. See documentation of try_resume_training
for more details.
Args | |
---|---|
*args
|
Passed through to the base keras.Model implementation.
|
**kwargs
|
Passed through to the base keras.Model implementation.
|
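For example, a sketch of resuming an interrupted training as described above (the temporary directory and the train_ds dataset are hypothetical):
model = tfdf.keras.GradientBoostedTreesModel(
    try_resume_training=True,             # default value
    temp_directory="/tmp/tfdf_training")  # hypothetical checkpoint location
model.fit(train_ds)  # resumes from the checkpoint in temp_directory, if any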
make_inspector
make_inspector(
index: int = 0
) -> tfdf.inspector.AbstractInspector
Creates an inspector to access the internal model structure.
Usage example:
inspector = model.make_inspector()
print(inspector.num_trees())
print(inspector.variable_importances())
Args | |
---|---|
index
|
Index of the sub-model. Only used for multitask models. |
Returns | |
---|---|
A model inspector. |
make_predict_function
make_predict_function()
Builds the function computing the predictions of the model (distinct from evaluation).
make_test_function
make_test_function()
Predictions for evaluation.
make_train_function
make_train_function(
force=False
)
Creates a function that executes one step of training.
This method can be overridden to support custom training logic.
This method is called by Model.fit
and Model.train_on_batch
.
Typically, this method directly controls tf.function
and
tf.distribute.Strategy
settings, and delegates the actual training
logic to Model.train_step
.
This function is cached the first time Model.fit
or
Model.train_on_batch
is called. The cache is cleared whenever
Model.compile
is called. You can skip the cache and regenerate the function with force=True .
Args | |
---|---|
force
|
Whether to regenerate the train function and skip the cached function if available. |
Returns | |
---|---|
Function. The function created by this method should accept a
tf.data.Iterator , and return a dict containing values that will
be passed to tf.keras.Callbacks.on_train_batch_end , such as
{'loss': 0.2, 'accuracy': 0.7} .
|
predefined_hyperparameters
@staticmethod
predefined_hyperparameters() -> List[
tfdf.keras.core.HyperParameterTemplate
]
Returns a better-than-default set of hyper-parameters.
They can be used directly with the hyperparameter_template
argument of the
model constructor.
These hyper-parameters outperform the default hyper-parameters (either generally or in specific scenarios). Like the default hyper-parameters, existing pre-defined hyper-parameter templates are stable and do not change.
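A sketch of listing the available templates and using one by name (the template name shown and the name/description fields are assumptions; check the returned list for the actual values):
templates = tfdf.keras.GradientBoostedTreesModel.predefined_hyperparameters()
for template in templates:
  print(template.name, template.description)

# Use one of the returned names in the model constructor.
model = tfdf.keras.GradientBoostedTreesModel(
    hyperparameter_template="better_default")  # illustrative template name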
predict
predict(
x,
batch_size=None,
verbose='auto',
steps=None,
callbacks=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False
)
Generates output predictions for the input samples.
Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.
For small numbers of inputs that fit in one batch,
directly use __call__()
for faster execution, e.g.,
model(x)
, or model(x, training=False)
if you have layers such as
tf.keras.layers.BatchNormalization
that behave differently during
inference. You may pair the individual model call with a tf.function
for additional performance inside your inner loop.
If you need access to numpy array values instead of tensors after your
model call, you can use tensor.numpy()
to get the numpy array value of
an eager tensor.
Also, note the fact that test loss is not affected by regularization layers like noise and dropout.
Args | |
---|---|
x
|
Input samples. It could be:
|
batch_size
|
Integer or None .
Number of samples per batch.
If unspecified, batch_size will default to 32.
Do not specify the batch_size if your data is in the
form of dataset, generators, or keras.utils.Sequence instances
(since they generate batches).
|
verbose
|
"auto" , 0, 1, or 2. Verbosity mode.
0 = silent, 1 = progress bar, 2 = single line.
"auto" becomes 1 for most cases, and to 2 when used with
ParameterServerStrategy . Note that the progress bar is not
particularly useful when logged to a file, so verbose=2 is
recommended when not running interactively (e.g. in a production
environment). Defaults to 'auto'.
|
steps
|
Total number of steps (batches of samples)
before declaring the prediction round finished.
Ignored with the default value of None . If x is a tf.data
dataset and steps is None, predict() will
run until the input dataset is exhausted.
|
callbacks
|
List of keras.callbacks.Callback instances.
List of callbacks to apply during prediction.
See callbacks.
|
max_queue_size
|
Integer. Used for generator or
keras.utils.Sequence input only. Maximum size for the
generator queue. If unspecified, max_queue_size will default
to 10.
|
workers
|
Integer. Used for generator or keras.utils.Sequence input
only. Maximum number of processes to spin up when using
process-based threading. If unspecified, workers will default
to 1.
|
use_multiprocessing
|
Boolean. Used for generator or
keras.utils.Sequence input only. If True , use process-based
threading. If unspecified, use_multiprocessing will default to
False . Note that because this implementation relies on
multiprocessing, you should not pass non-pickleable arguments to
the generator as they can't be passed easily to children
processes.
|
See the discussion of Unpacking behavior for iterator-like inputs
for
Model.fit
. Note that Model.predict uses the same interpretation rules
as Model.fit
and Model.evaluate
, so inputs must be unambiguous for
all three methods.
Returns | |
---|---|
Numpy array(s) of predictions. |
Raises | |
---|---|
RuntimeError
|
If model.predict is wrapped in a tf.function .
|
ValueError
|
In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size. |
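A typical prediction sketch for a TF-DF model (the pandas DataFrame test_df is hypothetical):
# No label is needed for prediction.
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df)
predictions = model.predict(test_ds)
print(predictions[:5])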
predict_get_leaves
predict_get_leaves(
x
)
Gets the index of the active leaf of each tree.
The active leaf is the leaf that receives the example during inference.
The returned value "leaves[i,j]" is the index of the active leaf for the i-th example and the j-th tree. Leaves are indexed by depth-first exploration with the negative child visited before the positive one (similar to the "iterate_on_nodes()" iteration). Leaf indices are also available with LeafNode.leaf_idx.
Args | |
---|---|
x
|
Input samples as a tf.data.Dataset. |
Returns | |
---|---|
Index of the active leaf for each tree in the model. |
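A short sketch, reusing the hypothetical test_ds dataset from the predict example above:
leaves = model.predict_get_leaves(test_ds)
# leaves[i, j] is the active leaf index of the i-th example in the j-th tree.
print(leaves.shape)  # (num_examples, num_trees)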
predict_on_batch
predict_on_batch(
x
)
Returns predictions for a single batch of samples.
Args | |
---|---|
x
|
Input data. It could be:
|
Returns | |
---|---|
Numpy array(s) of predictions. |
Raises | |
---|---|
RuntimeError
|
If model.predict_on_batch is wrapped in a
tf.function .
|
predict_step
predict_step(
data
)
The logic for one inference step.
This method can be overridden to support custom inference logic.
This method is called by Model.make_predict_function
.
This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.
Configuration details for how this logic is run (e.g. tf.function
and tf.distribute.Strategy
settings), should be left to
Model.make_predict_function
, which can also be overridden.
Args | |
---|---|
data
|
A nested structure of Tensor s.
|
Returns | |
---|---|
The result of one inference step, typically the output of calling the
Model on data.
|
ranking_group
ranking_group() -> Optional[str]
reset_metrics
reset_metrics()
Resets the state of all the metrics in the model.
Examples:
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
x = np.random.random((2, 3))
y = np.random.randint(0, 2, (2, 2))
_ = model.fit(x, y, verbose=0)
assert all(float(m.result()) for m in model.metrics)
model.reset_metrics()
assert all(float(m.result()) == 0 for m in model.metrics)
reset_states
reset_states()
save
save(
filepath: str, overwrite: Optional[bool] = True, **kwargs
)
Saves the model as a TensorFlow SavedModel.
The exported SavedModel contains a standalone Yggdrasil Decision Forests model in the "assets" sub-directory. The Yggdrasil model can be used directly using the Yggdrasil API. However, this model does not contain the "preprocessing" layer (if any).
Args | |
---|---|
filepath
|
Path to the output model. |
overwrite
|
If true, overwrite an already existing model. If false, raise an error if a model already exists. |
**kwargs
|
Arguments passed to the core keras model's save. |
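A minimal sketch (the export path is hypothetical; tf_keras refers to the Keras 2 package used by TF-DF, as in the load_weights documentation above):
model.save("/tmp/my_tfdf_model")

# Reload with the SavedModel API, as recommended above.
import tf_keras
loaded_model = tf_keras.models.load_model("/tmp/my_tfdf_model")

# The standalone Yggdrasil model is stored under "/tmp/my_tfdf_model/assets".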
save_own_variables
save_own_variables(
store
)
Saves the state of the layer.
You can override this method to take full control of how the state of
the layer is saved upon calling model.save()
.
Args | |
---|---|
store
|
Dict where the state of the model will be saved. |
save_spec
save_spec(
dynamic_batch=True
)
Returns the tf.TensorSpec
of call args as a tuple (args, kwargs)
.
This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:
model = tf.keras.Model(...)
@tf.function
def serve(*args, **kwargs):
outputs = model(*args, **kwargs)
# Apply postprocessing steps, or add additional outputs.
...
return outputs
# arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()
model.save(path, signatures={
'serving_default': serve.get_concrete_function(*arg_specs,
**kwarg_specs)
})
Args | |
---|---|
dynamic_batch
|
Whether to set the batch sizes of all the returned
tf.TensorSpec to None . (Note that when defining functional or
Sequential models with tf.keras.Input([...], batch_size=X) , the
batch size will always be preserved). Defaults to True .
|
Returns | |
---|---|
If the model inputs are defined, returns a tuple (args, kwargs) . All
elements in args and kwargs are tf.TensorSpec .
If the model inputs are not defined, returns None .
The model inputs are automatically set when calling the model,
model.fit , model.evaluate or model.predict .
|
save_weights
save_weights(
filepath, overwrite=True, save_format=None, options=None
)
Saves all layer weights.
Either saves in HDF5 or in TensorFlow format based on the save_format
argument.
When saving in HDF5 format, the weight file has:
- layer_names (attribute), a list of strings (ordered names of model layers).
- For every layer, a group named layer.name.
- For every such layer group, a group attribute weight_names, a list of strings (ordered names of the weight tensors of the layer).
- For every weight in the layer, a dataset storing the weight value, named after the weight tensor.
When saving in TensorFlow format, all objects referenced by the network
are saved in the same format as tf.train.Checkpoint
, including any
Layer
instances or Optimizer
instances assigned to object
attributes. For networks constructed from inputs and outputs using
tf.keras.Model(inputs, outputs)
, Layer
instances used by the network
are tracked/saved automatically. For user-defined classes which inherit
from tf.keras.Model
, Layer
instances must be assigned to object
attributes, typically in the constructor. See the documentation of
tf.train.Checkpoint
and tf.keras.Model
for details.
While the formats are the same, do not mix save_weights
and
tf.train.Checkpoint
. Checkpoints saved by Model.save_weights
should
be loaded using Model.load_weights
. Checkpoints saved using
tf.train.Checkpoint.save
should be restored using the corresponding
tf.train.Checkpoint.restore
. Prefer tf.train.Checkpoint
over
save_weights
for training checkpoints.
The TensorFlow format matches objects and variables by starting at a
root object, self
for save_weights
, and greedily matching attribute
names. For Model.save
this is the Model
, and for Checkpoint.save
this is the Checkpoint
even if the Checkpoint
has a model attached.
This means saving a tf.keras.Model
using save_weights
and loading
into a tf.train.Checkpoint
with a Model
attached (or vice versa)
will not match the Model
's variables. See the
guide to training checkpoints for details on
the TensorFlow format.
Args | |
---|---|
filepath
|
String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format. |
overwrite
|
Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt. |
save_format
|
Either 'tf' or 'h5'. A filepath ending in '.h5' or
'.keras' will default to HDF5 if save_format is None .
Otherwise, None becomes 'tf'. Defaults to None .
|
options
|
Optional tf.train.CheckpointOptions object that specifies
options for saving weights.
|
Raises | |
---|---|
ImportError
|
If h5py is not available when attempting to save in
HDF5 format.
|
set_weights
set_weights(
weights
)
Sets the weights of the layer, from NumPy arrays.
The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer's weights must be instantiated before calling this function, by calling the layer.
For example, a Dense
layer returns a list of two values: the kernel
matrix and the bias vector. These can be used to set the weights of
another Dense
layer:
layer_a = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
layer_b = tf.keras.layers.Dense(1,
kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.get_weights()
[array([[2.],
[2.],
[2.]], dtype=float32), array([0.], dtype=float32)]
layer_b.set_weights(layer_a.get_weights())
layer_b.get_weights()
[array([[1.],
[1.],
[1.]], dtype=float32), array([0.], dtype=float32)]
Args | |
---|---|
weights
|
a list of NumPy arrays. The number
of arrays and their shapes must match
the weights of the layer (i.e. they
should match the output of get_weights ).
|
Raises | |
---|---|
ValueError
|
If the provided weights list does not match the layer's specifications. |
summary
summary(
line_length=None, positions=None, print_fn=None
)
Shows information about the model.
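The information is printed by default; a sketch of capturing the text instead, using the print_fn argument from the signature above (it assumes the model has already been trained):
# Capture the summary text instead of printing it.
lines = []
model.summary(print_fn=lines.append)
summary_text = "\n".join(lines)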
support_distributed_training
support_distributed_training()
test_on_batch
test_on_batch(
x, y=None, sample_weight=None, reset_metrics=True, return_dict=False
)
Test the model on a single batch of samples.
Args | |
---|---|
x
|
Input data. It could be:
|
y
|
Target data. Like the input data x , it could be either Numpy
array(s) or TensorFlow tensor(s). It should be consistent with x
(you cannot have Numpy inputs and tensor targets, or inversely).
|
sample_weight
|
Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. |
reset_metrics
|
If True , the metrics returned will be only for this
batch. If False , the metrics will be statefully accumulated
across batches.
|
return_dict
|
If True , loss and metric results are returned as a
dict, with each key being the name of the metric. If False , they
are returned as a list.
|
Returns | |
---|---|
Scalar test loss (if the model has a single output and no metrics)
or list of scalars (if the model has multiple outputs
and/or metrics). The attribute model.metrics_names will give you
the display labels for the scalar outputs.
|
Raises | |
---|---|
RuntimeError
|
If model.test_on_batch is wrapped in a
tf.function .
|
test_step
test_step(
data
)
The logic for one evaluation step.
This method can be overridden to support custom evaluation logic.
This method is called by Model.make_test_function
.
This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.
Configuration details for how this logic is run (e.g. tf.function
and tf.distribute.Strategy
settings), should be left to
Model.make_test_function
, which can also be overridden.
Args | |
---|---|
data
|
A nested structure of Tensor s.
|
Returns | |
---|---|
A dict containing values that will be passed to
tf.keras.callbacks.CallbackList.on_train_batch_end . Typically, the
values of the Model 's metrics are returned.
|
to_json
to_json(
**kwargs
)
Returns a JSON string containing the network configuration.
To load a network from a JSON save file, use
keras.models.model_from_json(json_string, custom_objects={})
.
Args | |
---|---|
**kwargs
|
Additional keyword arguments to be passed to
json.dumps() .
|
Returns | |
---|---|
A JSON string. |
to_yaml
to_yaml(
**kwargs
)
Returns a yaml string containing the network configuration.
To load a network from a yaml save file, use
keras.models.model_from_yaml(yaml_string, custom_objects={})
.
custom_objects
should be a dictionary mapping
the names of custom losses / layers / etc to the corresponding
functions / classes.
Args | |
---|---|
**kwargs
|
Additional keyword arguments
to be passed to yaml.dump() .
|
Returns | |
---|---|
A YAML string. |
Raises | |
---|---|
RuntimeError
|
Announces that the method poses a security risk. |
train_on_batch
train_on_batch(
*args, **kwargs
)
Not supported for TensorFlow Decision Forests models.
Decision forests are not trained in batches the same way neural networks are. To avoid confusion, train_on_batch is disabled.
Args | |
---|---|
*args
|
Ignored. |
**kwargs
|
Ignored. |
train_step
train_step(
data
)
Collects training examples.
uplift_treatment
uplift_treatment() -> Optional[str]
valid_step
valid_step(
data
)
Collects validation examples.
with_name_scope
@classmethod
with_name_scope( method )
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
@tf.Module.with_name_scope
def __call__(self, x):
if not hasattr(self, 'w'):
self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
return tf.matmul(x, self.w)
Using the above module would produce tf.Variable
s and tf.Tensor
s whose
names included the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args | |
---|---|
method
|
The method to wrap. |
Returns | |
---|---|
The original method wrapped such that it enters the module's name scope. |
yggdrasil_model_path_tensor
yggdrasil_model_path_tensor(
multitask_model_index: int = 0
) -> Optional[tf.Tensor]
Gets the path to yggdrasil model, if available.
The effective path can be obtained with:
yggdrasil_model_path_tensor().numpy().decode("utf-8")
Args | |
---|---|
multitask_model_index
|
Index of the sub-model. Only used for multitask models. |
Returns | |
---|---|
Path to the Yggdrasil model. |
yggdrasil_model_prefix
yggdrasil_model_prefix(
index: int = 0
) -> str
Gets the prefix of the internal yggdrasil model.
__call__
__call__(
*args, **kwargs
)