Config for a calibrated lattice ensemble model.
tfl.configs.CalibratedLatticeEnsembleConfig(
    feature_configs=None,
    lattices='random',
    num_lattices=None,
    lattice_rank=None,
    interpolation='hypercube',
    parameterization='all_vertices',
    num_terms=2,
    separate_calibrators=True,
    use_linear_combination=False,
    use_bias=False,
    regularizer_configs=None,
    output_min=None,
    output_max=None,
    output_calibration=False,
    output_calibration_num_keypoints=10,
    output_initialization='quantiles',
    output_calibration_input_keypoints_type='fixed',
    fix_ensemble_for_2d_constraints=True,
    random_seed=0
)
A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
The ensemble structure can be one of the following, set via the lattices argument:
- Explicit list of lists of features specifying the features used in each submodel (see the explicit-ensemble example below).
- A random arrangement (also called Random Tiny Lattices, or RTL).
- Crystals growing algorithm: This algorithm first constructs a prefitting model to assess pairwise interactions between features, and then uses those estimates to construct a final model that puts interacting features in the same lattice. For details see "Fast and flexible monotonic functions with ensembles of lattices", Advances in Neural Information Processing Systems, 2016.
Examples:
Creating a random ensemble (RTL) model:
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    num_lattices=6,  # number of lattices
    lattice_rank=5,  # number of features in each lattice
    feature_configs=[...],
)
feature_analysis_input_fn = create_input_fn(num_epochs=1, ...)
train_input_fn = create_input_fn(num_epochs=100, ...)
estimator = tfl.estimators.CannedClassifier(
    feature_columns=feature_columns,
    model_config=model_config,
    feature_analysis_input_fn=feature_analysis_input_fn)
estimator.train(input_fn=train_input_fn)
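After training, the canned estimator supports the standard tf.estimator evaluation and prediction calls. A minimal sketch, assuming eval_input_fn is built the same way as the other input functions:
eval_input_fn = create_input_fn(num_epochs=1, ...)
# Standard tf.estimator.Estimator methods on the trained canned classifier.
metrics = estimator.evaluate(input_fn=eval_input_fn)
predictions = list(estimator.predict(input_fn=eval_input_fn))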
You can also construct a random ensemble (RTL) using a tfl.layers.RTL layer so long as all features have the same lattice size:
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    lattices='rtl_layer',
    num_lattices=6,  # number of lattices
    lattice_rank=5,  # number of features in each lattice
    feature_configs=[...],
)
feature_analysis_input_fn = create_input_fn(num_epochs=1, ...)
train_input_fn = create_input_fn(num_epochs=100, ...)
estimator = tfl.estimators.CannedClassifier(
    feature_columns=feature_columns,
    model_config=model_config,
    feature_analysis_input_fn=feature_analysis_input_fn)
estimator.train(input_fn=train_input_fn)
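The same model_config can also be used outside of estimators with the premade Keras model. The following is a rough sketch assuming tfl.premade.CalibratedLatticeEnsemble and a binary classification setup; note that without a feature_analysis_input_fn, the input keypoints in each tfl.configs.FeatureConfig must be set explicitly:
import tensorflow as tf
import tensorflow_lattice as tfl

# Build a Keras model directly from the model config.
model = tfl.premade.CalibratedLatticeEnsemble(model_config)
model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(0.01))
# model.fit(...) expects the input features in the order listed in
# model_config.feature_configs.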
To create a Crystals model, you will need to provide a prefitting_input_fn to the estimator constructor. This input_fn is used to train the prefitting model, as described above. The prefitting model does not need to be fully trained, so a few epochs should be enough.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    lattices='crystals',  # feature arrangement method
    num_lattices=6,  # number of lattices
    lattice_rank=5,  # number of features in each lattice
    feature_configs=[...],
)
feature_analysis_input_fn = create_input_fn(num_epochs=1, ...)
prefitting_input_fn = create_input_fn(num_epochs=5, ...)
train_input_fn = create_input_fn(num_epochs=100, ...)
estimator = tfl.estimators.CannedClassifier(
    feature_columns=feature_columns,
    model_config=model_config,
    feature_analysis_input_fn=feature_analysis_input_fn,
    prefitting_input_fn=prefitting_input_fn)
estimator.train(input_fn=train_input_fn)
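Finally, you can fix the ensemble structure yourself by passing an explicit list of lists of feature names as lattices; in that case num_lattices and lattice_rank do not need to be set. A minimal sketch (the feature names are illustrative):
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    lattices=[
        ['feature_a', 'feature_b', 'feature_c'],
        ['feature_a', 'feature_d'],
        ['feature_b', 'feature_c', 'feature_d'],
    ],
    feature_configs=[...],
)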
Args

feature_configs
  A list of tfl.configs.FeatureConfig instances that specify configurations for each feature. If a configuration is not provided for a feature, a default configuration will be used.
lattices
  Should be one of the following:
  - 'random': features in each lattice are selected randomly.
  - 'rtl_layer': features in each lattice are selected randomly, using a tfl.layers.RTL layer (all features must have the same lattice size).
  - 'crystals': the Crystals algorithm arranges interacting features into the same lattices (requires a prefitting_input_fn).
  - An explicit list of lists of feature names specifying the features used in each lattice in the ensemble.
num_lattices
  Number of lattices in the ensemble. Must be provided if lattices are not explicitly provided.
lattice_rank
  Number of features in each lattice. Must be provided if lattices are not explicitly provided.
interpolation
  One of 'hypercube' or 'simplex' interpolation. For a d-dimensional lattice, 'hypercube' interpolates 2^d parameters, whereas 'simplex' uses d+1 parameters and thus scales better. For details see tfl.lattice_lib.evaluate_with_simplex_interpolation and tfl.lattice_lib.evaluate_with_hypercube_interpolation.
parameterization
  The parameterization of the lattice function class to use. A lattice function is uniquely determined by specifying its value on every lattice vertex. A parameterization scheme is a mapping from a vector of parameters to a multidimensional array of lattice vertex values. It can be one of 'all_vertices' (one parameter per lattice vertex) or 'kronecker_factored' (a factored parameterization with num_terms terms; see num_terms below).
num_terms
  The number of terms in a lattice using 'kronecker_factored' parameterization. Ignored if parameterization is set to 'all_vertices'.
separate_calibrators
  Whether features should be separately calibrated for each lattice in the ensemble.
use_linear_combination
  If set to true, a linear combination layer will be used to combine ensemble outputs. Otherwise an averaging layer will be used. If the output is bounded or output calibration is used, then this layer will be a weighted average.
use_bias
  Whether a bias term should be used for the linear combination.
regularizer_configs
  A list of tfl.configs.RegularizerConfig instances that apply global regularization.
output_min
  Lower bound constraint on the output of the model.
output_max
  Upper bound constraint on the output of the model.
output_calibration
  Whether a piecewise-linear calibration should be used on the output of the lattice.
output_calibration_num_keypoints
  Number of keypoints to use for the output piecewise-linear calibration.
output_initialization
  The initial values to set up for the output of the model. When using output calibration, these values are used to initialize the output keypoints of the output piecewise-linear calibration. Otherwise the lattice parameters will be set up to form a linear function in the range of output_initialization. It can be one of:
  - 'quantiles': output is initialized to label quantiles, if possible.
  - 'uniform': output is initialized uniformly in the label range.
output_calibration_input_keypoints_type
  One of 'fixed' or 'learned_interior'. If 'learned_interior', keypoints are initialized to the values in pwl_calibration_input_keypoints but then allowed to vary during training, with the exception of the first and last keypoint locations, which are fixed.
fix_ensemble_for_2d_constraints
  A boolean indicating whether to add missing features to some lattices to resolve potential 2d constraint violations. Such constraints require each lattice in the ensemble to contain either both constrained features or neither of them; e.g., a trapezoid trust constraint requires any lattice that contains the "conditional" feature to also include the "main" feature. Note that this might increase the final lattice rank.
random_seed
  Random seed to use for randomized lattices.
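To illustrate how several of these arguments fit together, here is a hedged sketch of a more fully specified config; the regularizer name and all numeric values are illustrative, not recommendations:
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    feature_configs=[...],
    num_lattices=6,
    lattice_rank=5,
    interpolation='simplex',        # d+1 parameters per lattice instead of 2^d
    separate_calibrators=True,      # per-lattice input calibrators
    use_linear_combination=True,    # learned combination instead of plain averaging
    use_bias=True,
    regularizer_configs=[
        tfl.configs.RegularizerConfig(name='torsion', l2=1e-4),
    ],
    output_min=0.0,
    output_max=1.0,                 # bounded output, so the combination is a weighted average
    output_calibration=True,
    output_calibration_num_keypoints=10,
    output_initialization='quantiles',
    random_seed=42,
)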
Methods
deserialize_nested_configs
@classmethod
deserialize_nested_configs(config, custom_objects=None)
Returns a deserialized configuration dictionary.
feature_config_by_name
feature_config_by_name(feature_name)
Returns existing or default FeatureConfig with the given name.
from_config
@classmethod
from_config(config, custom_objects=None)
Creates an instance of the config class from a configuration dictionary.
get_config
get_config()
Returns a configuration dictionary.
regularizer_config_by_name
regularizer_config_by_name(regularizer_name)
Returns existing or default RegularizerConfig with the given name.
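As a rough usage sketch of these methods (the feature and regularizer names are illustrative): a config can be round-tripped through its dictionary form, and per-feature or per-regularizer settings can be looked up, or created with defaults, by name.
# Serialize the config and reconstruct it from the dictionary.
config_dict = model_config.get_config()
restored = tfl.configs.CalibratedLatticeEnsembleConfig.from_config(config_dict)

# Returns the existing config for 'age' if present, otherwise a default one.
age_config = model_config.feature_config_by_name('age')
# Same lookup-or-default behavior for regularizers.
torsion_config = model_config.regularizer_config_by_name('torsion')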