Interpolate signal using polyharmonic interpolation.
```python
tfa.image.interpolate_spline(
    train_points: tfa.types.TensorLike,
    train_values: tfa.types.TensorLike,
    query_points: tfa.types.TensorLike,
    order: int,
    regularization_weight: tfa.types.FloatTensorLike = 0.0,
    name: str = 'interpolate_spline'
) -> tf.Tensor
```
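For example, a minimal call might look like the following. The toy data and shapes are chosen only for illustration: one batch (`b=1`), `n=8` training points in `d=2` dimensions with scalar (`k=1`) values, queried at `m=5` locations.

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Toy random data (illustrative shapes only).
train_points = tf.random.uniform([1, 8, 2])   # [b, n, d]
train_values = tf.random.uniform([1, 8, 1])   # [b, n, k]
query_points = tf.random.uniform([1, 5, 2])   # [b, m, d]

query_values = tfa.image.interpolate_spline(
    train_points, train_values, query_points, order=2)

print(query_values.shape)  # (1, 5, 1), i.e. [b, m, k]
```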
The interpolant has the form
\[f(x) = \sum_{i = 1}^n w_i \phi(||x - c_i||) + v^T x + b.\]
This is a sum of two terms: (1) a weighted sum of radial basis function (RBF) terms, with the centers \(c_1, ... c_n\), and (2) a linear term with a bias. The \(c_i\) vectors are 'training' points. In the code, b is absorbed into v by appending 1 as a final dimension to x. The coefficients w and v are estimated such that the interpolant exactly fits the value of the function at the \(c_i\) points, the vector w is orthogonal to each \(c_i\), and the vector w sums to 0. With these constraints, the coefficients can be obtained by solving a linear system.
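To make the linear system concrete, here is a minimal NumPy sketch for a single (unbatched) example. The kernel `phi` uses a common thin-plate convention for `order=2`, and the helper names (`fit_interpolant`, `evaluate`) are ours; the library's internal solver may differ in kernel scaling and numerical details.

```python
import numpy as np

def phi(r, order=2):
    # Thin-plate-style kernel for order=2; an assumed convention, not
    # necessarily the exact kernel used internally by the library.
    r = np.maximum(r, 1e-10)
    return 0.5 * r ** 2 * np.log(r)

def fit_interpolant(c, f, order=2, reg=0.0):
    """Solve for RBF weights w and affine coefficients v for one example.

    c: [n, d] training points, f: [n, k] training values.
    Returns (w, v) with w: [n, k] and v: [d + 1, k].
    """
    n, d = c.shape
    dists = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)  # [n, n]
    A = phi(dists, order) + reg * np.eye(n)       # RBF matrix (+ regularization)
    C = np.concatenate([c, np.ones((n, 1))], 1)   # affine part; the 1s absorb b into v
    # Block system: the top rows enforce the (regularized) fit at the training
    # points, the bottom rows enforce C^T w = 0, i.e. w is orthogonal to each
    # coordinate of the c_i and sums to zero, as described above.
    top = np.concatenate([A, C], axis=1)
    bottom = np.concatenate([C.T, np.zeros((d + 1, d + 1))], axis=1)
    lhs = np.concatenate([top, bottom], axis=0)
    rhs = np.concatenate([f, np.zeros((d + 1, f.shape[1]))], axis=0)
    sol = np.linalg.solve(lhs, rhs)
    return sol[:n], sol[n:]                       # w, v

def evaluate(x, c, w, v, order=2):
    """Evaluate f(x) = sum_i w_i phi(||x - c_i||) + [x, 1] v at x: [m, d]."""
    r = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=-1)      # [m, n]
    return phi(r, order) @ w + np.concatenate([x, np.ones((len(x), 1))], 1) @ v
```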
\(\phi\) is an RBF, parametrized by an interpolation order. Using order=2 produces the well-known thin-plate spline.
We also provide the option to perform regularized interpolation. Here, the interpolant is selected to trade off between the squared loss on the training data and a measure of its curvature. Using a regularization weight greater than zero means the interpolant will no longer exactly fit the training data; however, it may be less vulnerable to overfitting, particularly for high-order interpolation.
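As an illustration of this trade-off, the sketch below evaluates the interpolant back at its own training points with and without a regularization weight; the data and the weight value are arbitrary.

```python
import tensorflow as tf
import tensorflow_addons as tfa

train_points = tf.random.normal([1, 20, 1])       # [b, n, d]
train_values = tf.math.sin(train_points)          # [b, n, k]

# Query at the training points themselves.
exact = tfa.image.interpolate_spline(
    train_points, train_values, train_points, order=3)
smoothed = tfa.image.interpolate_spline(
    train_points, train_values, train_points, order=3,
    regularization_weight=0.1)

# With zero regularization the fit is exact (up to solver error); with a
# nonzero weight the residual at the training points is generally larger.
print(tf.reduce_max(tf.abs(exact - train_values)))
print(tf.reduce_max(tf.abs(smoothed - train_values)))
```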
Note that the interpolation procedure is differentiable with respect to all inputs besides the `order` parameter.
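For instance, gradients can be taken through the interpolation with `tf.GradientTape`; the loss below is an arbitrary example.

```python
import tensorflow as tf
import tensorflow_addons as tfa

train_points = tf.random.normal([1, 10, 2])
train_values = tf.random.normal([1, 10, 1])
query_points = tf.Variable(tf.random.normal([1, 5, 2]))

with tf.GradientTape() as tape:
    query_values = tfa.image.interpolate_spline(
        train_points, train_values, query_points, order=2)
    loss = tf.reduce_sum(tf.square(query_values))

# Gradients flow to query_points (and would likewise flow through
# train_points and train_values if those were watched).
grads = tape.gradient(loss, query_points)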
We support dynamically-shaped inputs, where the batch size `b` and the numbers of training and query points `n` and `m` may be `None` at graph construction time. However, the point dimensionality `d` and the value dimensionality `k` must be statically known.
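A sketch of such a dynamically-shaped graph, wrapping the call in a `tf.function` whose batch size and point counts are left unknown while `d=2` and `k=1` are fixed (these particular dimensions are assumptions for illustration):

```python
import tensorflow as tf
import tensorflow_addons as tfa

@tf.function(input_signature=[
    tf.TensorSpec([None, None, 2], tf.float32),  # train_points: b, n unknown
    tf.TensorSpec([None, None, 1], tf.float32),  # train_values: b, n unknown
    tf.TensorSpec([None, None, 2], tf.float32),  # query_points: b, m unknown
])
def interp(train_points, train_values, query_points):
    return tfa.image.interpolate_spline(
        train_points, train_values, query_points, order=2)
```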
| Returns |
|---|
| A `[b, m, k]` float `Tensor` of query values. We use `train_points` and `train_values` to perform polyharmonic interpolation. The query values are the values of the interpolant evaluated at the locations specified in `query_points`. |