The SchurComplement kernel.

Inherits From: AutoCompositeTensorPsdKernel, PositiveSemidefiniteKernel
tfp.substrates.jax.math.psd_kernels.SchurComplement(
    base_kernel,
    fixed_inputs,
    fixed_inputs_is_missing=None,
    diag_shift=None,
    cholesky_fn=None,
    validate_args=False,
    name='SchurComplement',
    _precomputed_divisor_matrix_cholesky=None
)
Given a block matrix M = [[A, B], [C, D]], the Schur complement of D in M is written M / D = A - B @ Inverse(D) @ C.
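For concreteness, here is a small NumPy sketch of that definition (illustration only; none of the names below are part of the TFP API):

import numpy as np

# Build a PSD block matrix M = [[A, B], [C, D]] and compute the Schur
# complement of D in M: A - B @ inv(D) @ C.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))
M = X @ X.T                     # a 5 x 5 PSD matrix
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]
schur = A - B @ np.linalg.solve(D, C)
# For PSD M (with invertible D), the Schur complement is again PSD:
print(np.all(np.linalg.eigvalsh(schur) > -1e-10))
# ==> True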
This class represents a PositiveSemidefiniteKernel whose behavior is as follows. We compute a matrix, analogous to D in the above definition, by calling base_kernel.matrix(fixed_inputs, fixed_inputs). Then, given new input locations x and y, we can construct the remaining pieces of M above and compute the Schur complement of D in M (see Mathematical Details, below).
Notably, this kernel uses a bijector, Invert(CholeskyOuterProduct), as an intermediary for the requisite matrix solve, which means we get a caching benefit after the first use.
Mathematical Details
Suppose we have a kernel k and some fixed collection of inputs Z = [z0, z1, ..., zN]. Given new inputs x and y, we can form a block matrix

M = [
  [k(x, y), k(x, z0), ..., k(x, zN)],
  [k(z0, y), k(z0, z0), ..., k(z0, zN)],
  ...,
  [k(zN, y), k(zN, z0), ..., k(zN, zN)],
]
We might write this, so as to emphasize the block structure,

M = [
  [xy, xZ],
  [yZ^T, ZZ],
],

where

xy = [k(x, y)]
xZ = [k(x, z0), ..., k(x, zN)]
yZ = [k(y, z0), ..., k(y, zN)]
ZZ = "the matrix of k(zi, zj)'s"
Then we have the definition of this kernel's apply method:

schur_comp.apply(x, y) = xy - xZ @ ZZ^{-1} @ yZ^T

and similarly when x and y are collections of inputs, in which case the same formula applies blockwise.
As with other PSDKernels, the apply method acts as a (possibly vectorized) scalar function of 2 inputs. Given a single x and y, apply will yield a scalar output. Given two (equal size!) collections X and Y, it will yield another (equal size!) collection of scalar outputs.
Examples
Here's a simple example usage, with no particular motivation.
import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
psd_kernels = tfp.math.psd_kernels

base_kernel = psd_kernels.ExponentiatedQuadratic(amplitude=np.float64(1.))

# 3 points in 1-dimensional space (shape [3, 1]).
z = [[0.], [3.], [4.]]

schur_kernel = psd_kernels.SchurComplement(
    base_kernel=base_kernel,
    fixed_inputs=z)

# Two individual 1-d points
x = [1.]
y = [2.]
print(schur_kernel.apply(x, y))
# ==> k(x, y) - k(x, z) @ Inverse(k(z, z)) @ k(z, y)
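As a rough sanity check (not part of the documented API surface; it reuses base_kernel, z, x and y from the snippet above), the same value can be computed directly from the Schur-complement formula:

k_xy = base_kernel.apply(x, y)           # scalar k(x, y)
k_xz = base_kernel.matrix([x], z)        # shape [1, 3]
k_zy = base_kernel.matrix(z, [y])        # shape [3, 1]
k_zz = base_kernel.matrix(z, z)          # shape [3, 3]
manual = k_xy - (k_xz @ np.linalg.solve(k_zz, k_zy))[0, 0]
print(np.allclose(manual, schur_kernel.apply(x, y), atol=1e-5))
# ==> True (up to any jitter added by the default cholesky_fn)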
A more motivating application of this kernel is in constructing a Gaussian process that is conditioned on some observed data.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
psd_kernels = tfp.math.psd_kernels

base_kernel = psd_kernels.ExponentiatedQuadratic(amplitude=np.float64(1.))
observation_index_points = np.random.uniform(-1., 1., [50, 1])
observations = np.sin(2 * np.pi * observation_index_points[..., 0])

posterior_kernel = psd_kernels.SchurComplement(
    base_kernel=base_kernel,
    fixed_inputs=observation_index_points)

# Assume we use a zero prior mean, and compute the posterior mean by solving
# against the (cached) Cholesky factor of k(Z, Z).
# Note: the tf.linalg operators below follow the original TF example; with the
# JAX substrate one could use equivalent jax.numpy / jax.scipy solves instead.
def posterior_mean_fn(x):
  k_x_obs_linop = tf.linalg.LinearOperatorFullMatrix(
      base_kernel.matrix(x, observation_index_points))
  chol_linop = tf.linalg.LinearOperatorLowerTriangular(
      posterior_kernel.divisor_matrix_cholesky())
  return k_x_obs_linop.matvec(
      chol_linop.solvevec(
          chol_linop.solvevec(observations),
          adjoint=True))

# Construct the GP posterior distribution at some new points.
gp_posterior = tfp.distributions.GaussianProcess(
    index_points=np.linspace(-1., 1., 100)[..., np.newaxis],
    kernel=posterior_kernel,
    mean_fn=posterior_mean_fn)

# Draw 5 samples on the above 100-point grid
samples = gp_posterior.sample(5)
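For comparison, this noiseless conditioning is what tfd.GaussianProcessRegressionModel computes internally, so something along the following lines should produce an equivalent posterior (a sketch, assuming zero observation noise):

gprm = tfp.distributions.GaussianProcessRegressionModel(
    kernel=base_kernel,
    index_points=np.linspace(-1., 1., 100)[..., np.newaxis],
    observation_index_points=observation_index_points,
    observations=observations,
    observation_noise_variance=0.)
# gprm.mean() and gprm.covariance() should (up to jitter) match the
# posterior_mean_fn / posterior_kernel construction above.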
Methods
apply
apply(
x1, x2, example_ndims=0, name='apply'
)
Apply the kernel function to pairs of inputs.
Args | |
---|---|
x1
|
(Nested) Tensor input to the kernel, of shape B1 + E1 + F , where
B1 and E1 may be empty (ie, no batch/example dims, resp.). If
nested, B1 and E1 must broadcast across elements of the
structure. F (the feature shape) must have rank equal to the
kernel's feature_ndims property, or to the corresponding element of
the feature_ndims nested structure. Batch shape must broadcast with
the batch shape of x2 and with the kernel's batch shape. Example
shape must broadcast with example shape of x2 . x1 and x2 must
have the same number of example dims (ie, same rank).
|
x2
|
(Nested) Tensor input to the kernel, of shape B2 + E2 + F , where
B2 and E2 may be empty (ie, no batch/example dims, resp.). If
nested, B2 and E2 must broadcast across elements of the
structure. F (the feature shape) must have rank equal to the
kernel's feature_ndims property, or to the corresponding element of
the feature_ndims nested structure. Batch shape must broadcast with
the batch shape of x1 and with the kernel's batch shape. Example
shape must broadcast with example shape of x1 . x1 and x2 must
have the same number of example dims (ie, same rank).
|
example_ndims
|
A python integer, the number of example dims in the inputs. In essence, this parameter controls how broadcasting of the kernel's batch shape with input batch shapes works. The kernel batch shape will be broadcast against everything to the left of the combined example and feature dimensions in the input shapes. |
name
|
name to give to the op |
Returns | |
---|---|
Tensor containing the results of applying the kernel function to inputs
x1 and x2 . If the kernel parameters' batch shape is Bk then the
shape of the Tensor resulting from this method call is
broadcast(Bk, B1, B2) + broadcast(E1, E2) .
|
Given an index set S, a kernel function is mathematically defined as a real- or complex-valued function on S x S satisfying the positive semi-definiteness constraint:

sum_i sum_j (c[i]*) c[j] k(x[i], x[j]) >= 0

for any finite collections {x[1], ..., x[N]} in S and {c[1], ..., c[N]} in the reals (or the complex plane). '*' is the complex conjugate, in the complex case.
This method most closely resembles the function described in the mathematical definition of a kernel. Given a PositiveSemidefiniteKernel k with scalar parameters and inputs x and y in S, apply(x, y) yields a single scalar value.
Examples
import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
# Suppose `SomeKernel` acts on vectors (rank-1 tensors)
scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []
# `x` and `y` are batches of five 3-D vectors:
x = np.ones([5, 3], np.float32)
y = np.ones([5, 3], np.float32)
scalar_kernel.apply(x, y).shape
# ==> [5]
The above output is the result of vectorized computation of the five values
[k(x[0], y[0]), k(x[1], y[1]), ..., k(x[4], y[4])]
Now we can consider a kernel with batched parameters:
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.apply(x, y).shape
# ==> Error! [2] and [5] can't broadcast.
The parameter batch shape of [2] and the input batch shape of [5] can't be broadcast together. We can fix this in either of two ways:
Fix #1
Give the parameter a shape of [2, 1] which will correctly broadcast with [5] to yield [2, 5]:
batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[.2], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.apply(x, y).shape
# ==> [2, 5]
Fix #2
By specifying example_ndims, which tells the kernel to treat the 5 in the input shape as part of the "example shape", and "pushing" the kernel batch shape to the left:
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.apply(x, y, example_ndims=1).shape
# ==> [2, 5]
batch_shape_tensor
batch_shape_tensor(
name='batch_shape_tensor'
)
The batch shape of this kernel, as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this PositiveSemidefiniteKernel.
Args | |
---|---|
name
|
name to give to the op |
Returns | |
---|---|
batch_shape
|
Tensor .
|
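For example (a minimal illustration, reusing schur_kernel from the class-level example above, which has an empty batch shape):

print(schur_kernel.batch_shape)
# ==> an empty TensorShape
print(schur_kernel.batch_shape_tensor())
# ==> an empty integer vector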
copy
copy(
**override_parameters_kwargs
)
Creates a copy of the kernel.
Args | |
---|---|
**override_parameters_kwargs
|
String/value dictionary of initialization arguments to override with new values. |
Returns | |
---|---|
copied_kernel
|
A new instance of type(self) initialized from the union
of self.parameters and override_parameters_kwargs, i.e.,
dict(self.parameters, **override_parameters_kwargs) .
|
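For example, a sketch of overriding a single constructor argument while reusing the rest (names reused from the class-level example above):

shifted_kernel = schur_kernel.copy(diag_shift=np.float64(1e-6))
# Same base_kernel and fixed_inputs as schur_kernel, but with a small
# diagonal shift added before the divisor matrix is factorized.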
divisor_matrix
divisor_matrix()
divisor_matrix_cholesky
divisor_matrix_cholesky(
fixed_inputs=None, fixed_inputs_is_missing=None
)
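Neither of these methods carries a docstring above. Roughly, divisor_matrix evaluates the base kernel at the fixed inputs (the ZZ block, with diag_shift added if one was given), and divisor_matrix_cholesky returns its lower-triangular Cholesky factor as produced by cholesky_fn. A minimal sketch of that relationship, reusing schur_kernel from the class-level example (the default cholesky_fn may add a small jitter):

d = schur_kernel.divisor_matrix()              # base_kernel.matrix(z, z), possibly shifted
chol = schur_kernel.divisor_matrix_cholesky()  # lower-triangular factor of d
print(np.allclose(chol @ chol.T, d, atol=1e-4))
# ==> True, up to any jitter added by the default cholesky_fn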
matrix
matrix(
x1, x2, name='matrix'
)
Construct (batched) matrices from (batches of) collections of inputs.
Args | |
---|---|
x1
|
(Nested) Tensor input to the first positional parameter of the
kernel, of shape B1 + [e1] + F , where B1 may be empty (ie, no
batch dims, resp.), e1 is a single integer (ie, x1 has example
ndims exactly 1), and F (the feature shape) must have rank equal to
the kernel's feature_ndims property (or to the corresponding
element of feature_ndims , if nested). Batch shape must broadcast
with the batch shape of x2 and with the kernel's batch shape.
|
x2
|
(Nested) Tensor input to the second positional parameter of the
kernel, shape B2 + [e2] + F , where B2 may be empty (ie, no batch
dims, resp.), e2 is a single integer (ie, x2 has example ndims
exactly 1), and F (the feature shape) must have rank equal to the
kernel's feature_ndims property (or to the corresponding element of
feature_ndims , if nested). Batch shape must broadcast with the
batch shape of x1 and with the kernel's batch shape.
|
name
|
name to give to the op |
Returns | |
---|---|
Tensor containing the matrix (possibly batched) of kernel applications
to pairs from inputs x1 and x2 . If the kernel parameters' batch shape
is Bk then the shape of the Tensor resulting from this method call is
broadcast(Bk, B1, B2) + [e1, e2] (note this differs from apply : the
example dimensions are concatenated, whereas in apply the example dims
are broadcast together).
|
Given inputs x1 and x2 of shapes

[b1, ..., bB, e1, f1, ..., fF]

and

[c1, ..., cC, e2, f1, ..., fF]

this method computes the batch of e1 x e2 matrices resulting from applying the kernel function to all pairs of inputs from x1 and x2. The shape of the batch of matrices is the result of broadcasting the batch shapes of x1, x2, and the kernel parameters (see examples below). As such, it's required that these shapes all be broadcast compatible. However, the kernel parameter batch shapes need not broadcast against the 'example shapes' (e1 and e2 above).
When the two inputs are the (batches of) identical collections, the resulting matrix is the so-called Gram (or Gramian) matrix (https://en.wikipedia.org/wiki/Gramian_matrix).
Examples
First, consider a kernel with a single scalar parameter.
import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []
# Our inputs are two lists of 3-D vectors
x = np.ones([5, 3], np.float32)
y = np.ones([4, 3], np.float32)
scalar_kernel.matrix(x, y).shape
# ==> [5, 4]
The result comes from applying the kernel to the entries in x and y pairwise, across all pairs:
| k(x[0], y[0]) k(x[0], y[1]) ... k(x[0], y[3]) |
| k(x[1], y[0]) k(x[1], y[1]) ... k(x[1], y[3]) |
| ... ... ... |
| k(x[4], y[0]) k(x[4], y[1]) ... k(x[4], y[3]) |
Now consider a kernel with batched parameters, with the same inputs:
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> [2, 5, 4]
This results in a batch of 2 matrices, one computed from the kernel with param = 1. and the other with param = .5.
We also support batching of the inputs. First, let's look at that with the scalar kernel again.
# Batch of 10 lists of 5 vectors of dimension 3
x = np.ones([10, 5, 3], np.float32)
# Batch of 10 lists of 4 vectors of dimension 3
y = np.ones([10, 4, 3], np.float32)
scalar_kernel.matrix(x, y).shape
# ==> [10, 5, 4]
The result is a batch of 10 matrices built from the batch of 10 lists of input vectors. These batch shapes have to be broadcastable. The following will not work:
x = np.ones([10, 5, 3], np.float32)
y = np.ones([20, 4, 3], np.float32)
scalar_kernel.matrix(x, y).shape
# ==> Error! [10] and [20] can't broadcast.
Now let's consider batches of inputs in conjunction with batches of kernel parameters. We require that the input batch shapes be broadcastable with the kernel parameter batch shapes, otherwise we get an error:
x = np.ones([10, 5, 3], np.float32)
y = np.ones([10, 4, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> Error! [2] and [10] can't broadcast.
The fix is to make the kernel parameter shape broadcastable with [10] (or reshape the inputs to be broadcastable!):
x = np.ones([10, 5, 3], np.float32)
y = np.ones([10, 4, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[1.], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.matrix(x, y).shape
# ==> [2, 10, 5, 4]
# Or, make the inputs broadcastable:
x = np.ones([10, 1, 5, 3], np.float32)
y = np.ones([10, 1, 4, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> [10, 2, 5, 4]
Here, we have the result of applying the kernel, with 2 different parameters, to each of a batch of 10 pairs of input lists.
parameter_properties
@classmethod
parameter_properties( dtype=tf.float32 )
Returns a dict mapping constructor arg names to property annotations.
This dict should include an entry for each of the kernel's Tensor-valued constructor arguments.
Args | |
---|---|
dtype
|
Optional float dtype to assume for continuous-valued parameters.
Some constraining bijectors require advance knowledge of the dtype
because certain constants (e.g., tfb.Softplus.low ) must be
instantiated with the same dtype as the values to be transformed.
|
Returns | |
---|---|
parameter_properties
|
A str -> tfp.python.internal.parameter_properties.ParameterProperties dict mapping constructor argument names to ParameterProperties instances.
|
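A small usage sketch (the exact set of keys depends on the TFP version):

props = tfp.math.psd_kernels.SchurComplement.parameter_properties()
print(sorted(props.keys()))
# ==> e.g. ['base_kernel', 'diag_shift', 'fixed_inputs', ...]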
tensor
tensor(
x1, x2, x1_example_ndims, x2_example_ndims, name='tensor'
)
Construct (batched) tensors from (batches of) collections of inputs.
Args | |
---|---|
x1
|
(Nested) Tensor input to the first positional parameter of the
kernel, of shape B1 + E1 + F , where B1 and E1 are arbitrary shapes
which may be empty (ie, no batch/example dims, resp.), and F (the
feature shape) must have rank equal to the kernel's feature_ndims
property (or to the corresponding element of feature_ndims , if
nested). Batch shape must broadcast with the batch shape of x2 and
with the kernel's batch shape.
|
x2
|
(Nested) Tensor input to the second positional parameter of the
kernel, of shape B2 + E2 + F , where B2 and E2 are arbitrary shapes
which may be empty (ie, no batch/example dims, resp.), and F (the
feature shape) must have rank equal to the kernel's feature_ndims
property (or to the corresponding element of feature_ndims , if
nested). Batch shape must broadcast with the batch shape of x1 and
with the kernel's batch shape.
|
x1_example_ndims
|
A python integer greater than or equal to 0, the number
of example dims in the first input. This affects both the alignment of
batch shapes and the shape of the final output of the function.
Everything left of the feature shape and the example dims in x1 is
considered "batch shape", and must broadcast as specified above.
|
x2_example_ndims
|
A python integer greater than or equal to 0, the number
of example dims in the second input. This affects both the alignment of
batch shapes and the shape of the final output of the function.
Everything left of the feature shape and the example dims in x2 is
considered "batch shape", and must broadcast as specified above.
|
name
|
name to give to the op |
Returns | |
---|---|
Tensor containing (possibly batched) kernel applications to pairs from
inputs x1 and x2 . If the kernel parameters' batch shape is Bk then
the shape of the Tensor resulting from this method call is
broadcast(Bk, B1, B2) + E1 + E2 . Note this differs from apply : the
example dimensions are concatenated, whereas in apply the example dims
are broadcast together. It also differs from matrix : the example shapes
are arbitrary here, and the result accrues a rank equal to the sum of the
ranks of the input example shapes.
|
Examples
First, consider a kernel with a single scalar parameter.
import numpy as np
import tensorflow_probability as tfp; tfp = tfp.substrates.jax
scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []
# Our inputs are two rank-2 collections of 3-D vectors
x = np.ones([5, 6, 3], np.float32)
y = np.ones([7, 8, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [5, 6, 7, 8]
# Empty example shapes work too!
x = np.ones([3], np.float32)
y = np.ones([5, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=0, x2_example_ndims=1).shape
# ==> [5]
The result comes from applying the kernel to the entries in x and y pairwise, across all pairs:
| k(x[0], y[0]) k(x[0], y[1]) ... k(x[0], y[3]) |
| k(x[1], y[0]) k(x[1], y[1]) ... k(x[1], y[3]) |
| ... ... ... |
| k(x[4], y[0]) k(x[4], y[1]) ... k(x[4], y[3]) |
Now consider a kernel with batched parameters.
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
# Inputs are two rank-2 collections of 3-D vectors
x = np.ones([5, 6, 3], np.float32)
y = np.ones([7, 8, 3], np.float32)
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [2, 5, 6, 7, 8]
We also support batching of the inputs. First, let's look at that with the scalar kernel again.
# Batch of 10 lists of 5x6 collections of dimension 3
x = np.ones([10, 5, 6, 3], np.float32)
# Batch of 10 lists of 7x8 collections of dimension 3
y = np.ones([10, 7, 8, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [10, 5, 6, 7, 8]
The result is a batch of 10 tensors built from the batch of 10 rank-2 collections of input vectors. The batch shapes have to be broadcastable. The following will not work:
x = np.ones([10, 5, 3], np.float32)
y = np.ones([20, 4, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=1, x2_example_ndims=1).shape
# ==> Error! [10] and [20] can't broadcast.
Now let's consider batches of inputs in conjunction with batches of kernel parameters. We require that the input batch shapes be broadcastable with the kernel parameter batch shapes, otherwise we get an error:
x = np.ones([10, 5, 6, 3], np.float32)
y = np.ones([10, 7, 8, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> Error! [2] and [10] can't broadcast.
The fix is to make the kernel parameter shape broadcastable with [10] (or reshape the inputs to be broadcastable!):
x = np.ones([10, 5, 6, 3], np.float32)
y = np.ones([10, 7, 8, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[1.], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [2, 10, 5, 6, 7, 8]
# Or, make the inputs broadcastable:
x = np.ones([10, 1, 5, 6, 3], np.float32)
y = np.ones([10, 1, 7, 8, 3], np.float32)
batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [10, 2, 5, 6, 7, 8]
with_precomputed_divisor
@staticmethod
with_precomputed_divisor(
    base_kernel,
    fixed_inputs,
    fixed_inputs_is_missing=None,
    diag_shift=None,
    cholesky_fn=None,
    validate_args=False,
    name='PrecomputedSchurComplement',
    _precomputed_divisor_matrix_cholesky=None
)
Returns a SchurComplement with a precomputed divisor matrix.
This method is the same as creating a SchurComplement kernel, but assumes that fixed_inputs, diag_shift and base_kernel are unchanging / not parameterized by any mutable state. We explicitly read / concretize these values when this method is called, since we can precompute some factorizations in order to speed up subsequent invocations of the kernel.
Args | |
---|---|
base_kernel
|
A PositiveSemidefiniteKernel instance, the kernel used to
build the block matrices of which this kernel computes the Schur
complement.
|
fixed_inputs
|
A (nested) Tensor, representing a collection of inputs. The
Schur complement that this kernel computes comes from a block matrix,
whose bottom-right corner is derived from
base_kernel.matrix(fixed_inputs, fixed_inputs) , and whose top-right
and bottom-left pieces are constructed by computing the base_kernel
at pairs of input locations together with these fixed_inputs .
fixed_inputs is allowed to be an empty collection (either None or
having a zero shape entry), in which case the kernel falls back to
the trivial application of base_kernel to inputs. See class-level
docstring for more details on the exact computation this does;
fixed_inputs correspond to the Z structure discussed there.
fixed_inputs (or each of its nested components) is assumed to have
shape [b1, ..., bB, N, f1, ..., fF] where the b 's are batch shape
entries, the f 's are feature_shape entries, and N is the number
of fixed inputs. Use of this kernel entails a 1-time O(N^3) cost of
computing the Cholesky decomposition of the k(Z, Z) matrix. The batch
shape elements of fixed_inputs must be broadcast compatible with
base_kernel.batch_shape .
|
fixed_inputs_is_missing
|
A boolean Tensor of shape [..., N] . When fixed_inputs_is_missing is
not None and an element of it is True , the returned kernel will return
values computed as if the divisor matrix did not contain the
corresponding row or column.
|
diag_shift
|
A floating point scalar to be added to the diagonal of the divisor_matrix before computing its Cholesky. |
cholesky_fn
|
Callable which takes a single (batch) matrix argument and
returns a Cholesky-like lower triangular factor. Default value: None ,
in which case make_cholesky_with_jitter_fn is used with the jitter
parameter.
|
validate_args
|
If True , parameters are checked for validity despite
possibly degrading runtime performance.
Default value: False
|
name
|
Python str name prefixed to Ops created by this class.
Default value: "PrecomputedSchurComplement"
|
_precomputed_divisor_matrix_cholesky
|
Internal arg -- do not use. |
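A minimal usage sketch (reusing base_kernel and observation_index_points from the class-level example above); the divisor matrix Cholesky is computed eagerly, once, rather than on each kernel evaluation:

precomputed = tfp.math.psd_kernels.SchurComplement.with_precomputed_divisor(
    base_kernel=base_kernel,
    fixed_inputs=observation_index_points)
# Subsequent calls reuse the stored factorization.
test_points = np.linspace(-1., 1., 10)[..., np.newaxis]
print(precomputed.matrix(test_points, test_points).shape)
# ==> [10, 10]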
__add__
__add__(
k
)
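No docstring is attached here; as with other PositiveSemidefiniteKernels, k1 + k2 (and, below, k1 * k2) returns a kernel whose values are the elementwise sum (respectively product) of the operands' values. A rough sketch, reusing names from the class-level example:

sum_kernel = base_kernel + schur_kernel
print(np.allclose(
    sum_kernel.matrix(z, z),
    base_kernel.matrix(z, z) + schur_kernel.matrix(z, z)))
# ==> True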
__getitem__
__getitem__(
slices
)
Slices the batch axes of this kernel, returning a new instance.
import tensorflow as tf
import tensorflow_probability as tfp; tfpk = tfp.substrates.jax.math.psd_kernels

k = tfpk.ExponentiatedQuadratic(
    amplitude=tf.ones([3, 5, 7, 9]),
    length_scale=tf.ones([3, 5, 7, 9]))
k.batch_shape  # => [3, 5, 7, 9]

k2 = k[:, tf.newaxis, ..., -2:, 1::2]
k2.batch_shape  # => [3, 1, 5, 2, 4]
Args | |
---|---|
slices
|
slices from the [] operator |
Returns | |
---|---|
dist
|
A new PositiveSemidefiniteKernel instance with sliced parameters.
|
__iter__
__iter__()
__mul__
__mul__(
k
)
__radd__
__radd__(
lhs
)