tfc.entropy_models.LaplaceEntropyModel

Entropy model for Laplace distributed random variables.

This entropy model handles quantization and compression of a bottleneck tensor and implements a penalty that encourages compressibility under the Rice code.

Given a tensor of signed integers, run_length_encode encodes runs of zeros with a run-length code, each sign with a uniform bit, and each magnitude with a Rice code.
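The Rice code used for magnitudes splits a non-negative integer by a parameter k into a unary-coded quotient and a k-bit remainder. A minimal standalone sketch (illustrative only; the library's actual bit layout and range coder are not reproduced here):

```python
def rice_encode(m: int, k: int) -> str:
    """Rice code for a non-negative integer m with parameter k:
    the quotient m >> k in unary (q ones, then a terminating zero),
    followed by the remainder in k bits."""
    q, r = m >> k, m & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")


def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                       # unary part ends at first zero
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, rice_encode(9, 2) gives "11001": quotient 2 in unary ("110") followed by remainder 1 in two bits ("01"). Small k favors small magnitudes, which is why the L1 penalty below pairs well with this code.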

The penalty applied by this class is given by:

l1 * reduce_sum(abs(x))

This encourages x to follow a symmetrized Laplace distribution.
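As a sketch of the formula above, with NumPy standing in for TensorFlow and the sum taken over the innermost coding_rank dimensions per the coding-unit convention on this page (laplace_penalty is a hypothetical helper name, not part of the library):

```python
import numpy as np


def laplace_penalty(x: np.ndarray, l1: float, coding_rank: int) -> np.ndarray:
    """Sketch of l1 * reduce_sum(abs(x)), summed over the innermost
    `coding_rank` dimensions so that one penalty value remains per
    coding unit."""
    axes = tuple(range(x.ndim - coding_rank, x.ndim))
    return l1 * np.abs(x).sum(axis=axes)
```

For a bottleneck of shape (4, 8, 16) and coding_rank=2, this yields one penalty value per batch element, i.e. shape (4,).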

Args

coding_rank Integer. Number of innermost dimensions considered a coding unit. Each coding unit is compressed to its own bit string, and the estimated rate is summed over each coding unit in bits().
l1 Float. L1 regularization factor.
run_length_code Integer. Run lengths are encoded with a Rice code if this is >= 0, else with a Gamma code.
magnitude_code Integer. Magnitudes are encoded with a Rice code if this is >= 0, else with a Gamma code.
use_run_length_for_non_zeros Bool. Whether to encode nonzero run lengths.
bottleneck_dtype tf.dtypes.DType. Data type of bottleneck tensor. Defaults to tf.keras.mixed_precision.global_policy().compute_dtype.
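Assuming the Gamma code referred to by run_length_code and magnitude_code is the Elias gamma code (an assumption; the page does not specify the variant), it can be sketched as follows: a positive integer n is written as floor(log2 n) zeros followed by the binary form of n. An illustration, not the library's implementation:

```python
def elias_gamma_encode(n: int) -> str:
    """Elias gamma code for a positive integer n: (bit-length - 1)
    zeros, then the binary representation of n."""
    b = format(n, "b")
    return "0" * (len(b) - 1) + b


def elias_gamma_decode(bits: str) -> int:
    z = bits.index("1")        # number of leading zeros = bit-length - 1
    return int(bits[z:2 * z + 1], 2)
```

For example, elias_gamma_encode(5) gives "00101". Unlike a Rice code, this needs no parameter, which fits the "negative value selects Gamma code" convention above.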

Attributes

bottleneck_dtype Data type of the bottleneck tensor.
coding_rank Number of innermost dimensions considered a coding unit.
l1 L1 parameter.
magnitude_code magnitude_code parameter.
name Returns the name of this module as passed or determined in the ctor.

name_scope Returns a tf.name_scope instance for this class.
non_trainable_variables Sequence of non-trainable variables owned by this module and its submodules.
run_length_code run_length_code parameter.
submodules Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True

trainable_variables Sequence of trainable variables owned by this module and its submodules.

use_run_length_for_non_zeros use_run_length_for_non_zeros parameter.
variables Sequence of variables owned by this module and its submodules.

Methods

compress

Compresses a floating-point tensor.

Compresses the tensor to bit strings. bottleneck is first quantized as in quantize(), and then compressed using the run-length Rice code. The quantized tensor can later be recovered by calling decompress().

The innermost self.coding_rank dimensions are treated as one coding unit, i.e. are compressed into one string each. Any additional dimensions to the left are treated as batch dimensions.
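The shape bookkeeping described above can be checked with a small hypothetical helper: the innermost coding_rank dimensions collapse into one bit string each, leaving only the batch dimensions.

```python
def compressed_shape(bottleneck_shape: tuple, coding_rank: int) -> tuple:
    """Shape of the string tensor returned by compress(): the batch
    dimensions of the bottleneck, with the innermost `coding_rank`
    dimensions collapsed into one bit string each. Illustrative
    helper only, not part of the library."""
    if len(bottleneck_shape) < coding_rank:
        raise ValueError("bottleneck must have at least coding_rank dimensions")
    return bottleneck_shape[:len(bottleneck_shape) - coding_rank]
```

For example, a bottleneck of shape (4, 8, 16) with coding_rank=2 compresses to a string tensor of shape (4,).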

Args
bottleneck tf.Tensor containing the data to be compressed. Must have at least self.coding_rank dimensions.

Returns
A tf.Tensor having the same shape as bottleneck without the self.coding_rank innermost dimensions, containing a string for each coding unit.

decode_fn

decompress

Decompresses a tensor.

Reconstructs the quantized tensor from bit strings produced by compress().

Args
strings tf.Tensor containing the compressed bit strings.
code_shape Shape of innermost dimensions of the output tf.Tensor.

Returns
A tf.Tensor of shape tf.shape(strings) + code_shape.

encode_fn

penalty

Computes penalty encouraging compressibility.

Args
bottleneck tf.Tensor containing the data to be compressed. Must have at least self.coding_rank dimensions.

Returns
Penalty value, which has the same shape as bottleneck without the self.coding_rank innermost dimensions.

quantize

Quantizes a floating-point bottleneck tensor.

The tensor is rounded to integer values. The gradient of this rounding operation is overridden with the identity (straight-through gradient estimator).
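The straight-through estimator is commonly implemented as x + stop_gradient(round(x) - x). A NumPy sketch of the forward pass (NumPy has no autodiff, so the gradient behavior is only noted in comments):

```python
import numpy as np


def quantize_ste(x: np.ndarray) -> np.ndarray:
    """Forward pass of straight-through quantization.

    In TensorFlow this is typically written as
        x + tf.stop_gradient(tf.round(x) - x)
    so the forward value equals round(x), while the gradient of the
    stopped term is zero, leaving an identity gradient d/dx = 1.
    """
    return x + (np.round(x) - x)  # numerically equal to np.round(x)
```

The identity gradient lets training optimize through the otherwise zero-gradient rounding step.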

Args
bottleneck tf.Tensor containing the data to be quantized.

Returns
A tf.Tensor containing the quantized values.

with_name_scope

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names include the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>

Args
method The method to wrap.

Returns
The original method wrapped such that it enters the module's name scope.

__call__

Quantizes a tensor and computes the compressibility penalty.

Args
bottleneck tf.Tensor containing the data to be compressed. Must have at least self.coding_rank dimensions.

Returns
A tuple (self.quantize(bottleneck), self.penalty(bottleneck)).