Interface that defines how to specify gradients for a quantum circuit.
This abstract class allows for the creation of gradient calculation procedures for (expectation values from) quantum circuits, with respect to a set of input parameter values. This allows one to backpropagate through a quantum circuit.
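For example, a concrete differentiator such as tfq.differentiators.ParameterShift can be handed to a TFQ layer, which then uses it during backpropagation. A minimal sketch (the circuit, symbol, and operator here are illustrative, not part of this interface):

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
alpha = sympy.Symbol('alpha')
circuit = cirq.Circuit(cirq.X(qubit) ** alpha)

# Tell the Expectation layer which differentiator to use when
# backpropagating through the circuit.
expectation = tfq.layers.Expectation(
    differentiator=tfq.differentiators.ParameterShift())

values = tf.Variable([[0.5]])
with tf.GradientTape() as tape:
    out = expectation(
        circuit,
        symbol_names=[alpha],
        symbol_values=values,
        operators=cirq.Z(qubit))
grads = tape.gradient(out, values)  # d<Z>/d(alpha)
```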
Methods
differentiate_analytic
```python
@tf.function
differentiate_analytic(
    programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad
)
```
Differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. differentiate_analytic calls the inheriting differentiator's get_gradient_circuits and uses those components to construct the gradient.
Args | |
---|---|
programs | tf.Tensor of strings with shape [batch_size] containing the string representations of the circuits to be executed.
symbol_names | tf.Tensor of strings with shape [n_params], which is used to specify the order in which the values in symbol_values should be placed inside of the circuits in programs.
symbol_values | tf.Tensor of real numbers with shape [batch_size, n_params] specifying parameter values to resolve into the circuits specified by programs, following the ordering dictated by symbol_names.
pauli_sums | tf.Tensor of strings with shape [batch_size, n_ops] containing the string representation of the operators that will be used on all of the circuits in the expectation calculations.
forward_pass_vals | tf.Tensor of real numbers with shape [batch_size, n_ops] containing the output of the forward pass through the op you are differentiating.
grad | tf.Tensor of real numbers with shape [batch_size, n_ops] representing the gradient backpropagated to the output of the op you are differentiating through.
Returns | |
---|---|
A tf.Tensor with the same shape as symbol_values representing the gradient backpropagated to the symbol_values input of the op you are differentiating through. |
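differentiate_analytic is not usually called by user code; it is registered by generate_differentiable_op as the gradient of an analytic expectation op and invoked by TensorFlow during backpropagation. A minimal sketch of that wiring, assuming an illustrative one-qubit circuit:

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# Attach a differentiator to the analytic expectation op.
analytic_op = tfq.differentiators.ParameterShift().generate_differentiable_op(
    analytic_op=tfq.get_expectation_op())

qubit = cirq.GridQubit(0, 0)
programs = tfq.convert_to_tensor(
    [cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))])
symbol_names = tf.constant(['alpha'])
symbol_values = tf.constant([[0.123]])
pauli_sums = tfq.convert_to_tensor([[cirq.Z(qubit)]])

with tf.GradientTape() as tape:
    tape.watch(symbol_values)
    forward = analytic_op(programs, symbol_names, symbol_values, pauli_sums)
# TensorFlow calls differentiate_analytic under the hood here.
grads = tape.gradient(forward, symbol_values)
```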
differentiate_sampled
```python
@tf.function
differentiate_sampled(
    programs, symbol_names, symbol_values, pauli_sums, num_samples,
    forward_pass_vals, grad
)
```
Differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. differentiate_sampled calls the inheriting differentiator's get_gradient_circuits and uses those components to construct the gradient.
Args | |
---|---|
programs | tf.Tensor of strings with shape [batch_size] containing the string representations of the circuits to be executed.
symbol_names | tf.Tensor of strings with shape [n_params], which is used to specify the order in which the values in symbol_values should be placed inside of the circuits in programs.
symbol_values | tf.Tensor of real numbers with shape [batch_size, n_params] specifying parameter values to resolve into the circuits specified by programs, following the ordering dictated by symbol_names.
pauli_sums | tf.Tensor of strings with shape [batch_size, n_ops] containing the string representation of the operators that will be used on all of the circuits in the expectation calculations.
num_samples | tf.Tensor of positive integers with shape [batch_size, n_ops] giving the number of samples to draw for each term of each entry in pauli_sums during the forward pass.
forward_pass_vals | tf.Tensor of real numbers with shape [batch_size, n_ops] containing the output of the forward pass through the op you are differentiating.
grad | tf.Tensor of real numbers with shape [batch_size, n_ops] representing the gradient backpropagated to the output of the op you are differentiating through.
Returns | |
---|---|
A tf.Tensor with the same shape as symbol_values representing the gradient backpropagated to the symbol_values input of the op you are differentiating through. |
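Like its analytic counterpart, differentiate_sampled is invoked by TensorFlow once the differentiator has been attached to a sample-based op. A minimal sketch, assuming the same illustrative circuit as above:

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# Attach a differentiator to a sample-based expectation op.
sampled_op = tfq.differentiators.ParameterShift().generate_differentiable_op(
    sampled_op=tfq.get_sampled_expectation_op())

qubit = cirq.GridQubit(0, 0)
programs = tfq.convert_to_tensor(
    [cirq.Circuit(cirq.X(qubit) ** sympy.Symbol('alpha'))])
symbol_names = tf.constant(['alpha'])
symbol_values = tf.constant([[0.123]])
pauli_sums = tfq.convert_to_tensor([[cirq.Z(qubit)]])
num_samples = tf.constant([[1000]])  # samples per term in pauli_sums

with tf.GradientTape() as tape:
    tape.watch(symbol_values)
    forward = sampled_op(
        programs, symbol_names, symbol_values, pauli_sums, num_samples)
# A shot-noise-limited estimate of d<Z>/d(alpha).
grads = tape.gradient(forward, symbol_values)
```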
generate_differentiable_op
```python
generate_differentiable_op(
    *, sampled_op=None, analytic_op=None
)
```
Generate a differentiable op by attaching self to an op.
This function returns a tf.function that passes values through to forward_op during the forward pass and uses this differentiator (self) to backpropagate through the op during the backward pass. If sampled_op is provided, the differentiator's differentiate_sampled method will be invoked (which requires sampled_op to be a sample-based expectation op with a num_samples input tensor). If analytic_op is provided, the differentiator's differentiate_analytic method will be invoked (which requires analytic_op to be an analytic expectation op that does NOT have num_samples as an input). If both sampled_op and analytic_op are provided, an exception will be raised.
generate_differentiable_op() can be called only ONCE because of the one-differentiator-per-op policy. You need to call refresh() to reuse this differentiator with another op.
Args | |
---|---|
sampled_op | A callable op that you want to make differentiable using this differentiator's differentiate_sampled method.
analytic_op | A callable op that you want to make differentiable using this differentiator's differentiate_analytic method.
Returns | |
---|---|
A callable op whose gradients are now registered to be a call to this differentiator's differentiate_* function. |
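As noted above, passing both kinds of op at once is an error, and a second attachment without a refresh() is rejected. A sketch of both failure modes (the exact exception types are not specified by this interface, so the sketch catches broadly):

```python
import tensorflow_quantum as tfq

diff = tfq.differentiators.ForwardDifference()

# Providing both op kinds at once raises an exception.
try:
    diff.generate_differentiable_op(
        sampled_op=tfq.get_sampled_expectation_op(),
        analytic_op=tfq.get_expectation_op())
except Exception as e:
    print('both ops rejected:', e)

# A first attachment succeeds; a second one without refresh() does not.
op = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())
try:
    diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())
except Exception as e:
    print('one differentiator per op:', e)
```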
get_gradient_circuits
```python
@abc.abstractmethod
get_gradient_circuits(
    programs, symbol_names, symbol_values
)
```
Return circuits to compute gradients for given forward pass circuits.
Prepares (but does not execute) all intermediate circuits needed to calculate the gradients for the given forward pass circuits specified by programs, symbol_names, and symbol_values. The returned tf.Tensor objects give all necessary information to recreate the internal logic of the differentiator.
This base class defines the standard way to use the outputs of this function to obtain either analytic gradients or sample gradients. Below is code copied directly from the default differentiate_analytic implementation, followed by how one could obtain the same gradient automatically. The point is that the derivatives of some functions cannot be calculated via the available auto-diff (such as when the function is not expressible efficiently as a PauliSum); in those cases one needs to use get_gradient_circuits in the manual way.
Suppose we have some inputs programs, symbol_names, and symbol_values. To get the derivative of the expectation values of a tensor of PauliSums pauli_sums with respect to these inputs, do:
```python
diff = <some differentiator>()
(
    batch_programs, new_symbol_names, batch_symbol_values,
    batch_weights, batch_mapper
) = diff.get_gradient_circuits(
    programs, symbol_names, symbol_values)
exp_layer = tfq.layers.Expectation()
batch_pauli_sums = tf.tile(
    tf.expand_dims(pauli_sums, 1),
    [1, tf.shape(batch_programs)[1], 1])
n_batch_programs = tf.reduce_prod(tf.shape(batch_programs))
n_symbols = tf.shape(new_symbol_names)[0]
n_ops = tf.shape(pauli_sums)[1]
batch_expectations = exp_layer(
    tf.reshape(batch_programs, [n_batch_programs]),
    symbol_names=new_symbol_names,
    symbol_values=tf.reshape(
        batch_symbol_values, [n_batch_programs, n_symbols]),
    operators=tf.reshape(
        batch_pauli_sums, [n_batch_programs, n_ops]))
batch_expectations = tf.reshape(
    batch_expectations, tf.shape(batch_pauli_sums))
batch_jacobian = tf.map_fn(
    lambda x: tf.einsum('km,kmp->kp', x[0], tf.gather(x[1], x[2])),
    (batch_weights, batch_expectations, batch_mapper),
    fn_output_signature=tf.float32)
grad_manual = tf.reduce_sum(batch_jacobian, -1)
```
To perform the same gradient calculation automatically:
```python
with tf.GradientTape() as g:
    g.watch(symbol_values)
    exact_outputs = exp_layer(
        programs, symbol_names=symbol_names,
        symbol_values=symbol_values, operators=pauli_sums)
grad_auto = g.gradient(exact_outputs, symbol_values)
tf.math.reduce_all(grad_manual == grad_auto).numpy()  # True
```
Args | |
---|---|
programs | tf.Tensor of strings with shape [batch_size] containing the string representations of the circuits to be executed during the forward pass.
symbol_names | tf.Tensor of strings with shape [n_params], which is used to specify the order in which the values in symbol_values should be placed inside of the circuits in programs.
symbol_values | tf.Tensor of real numbers with shape [batch_size, n_params] specifying parameter values to resolve into the circuits specified by programs during the forward pass, following the ordering dictated by symbol_names.
Returns | |
---|---|
batch_programs | 2-D tf.Tensor of strings representing circuits to run to evaluate the gradients. The first dimension is the length of the input programs. At each index i in the first dimension is the tensor of circuits required to evaluate the gradient of the input circuit programs[i]. The size of the second dimension is determined by the inheriting differentiator.
new_symbol_names | tf.Tensor of strings, containing the name of every symbol used in every circuit in batch_programs. The length is determined by the inheriting differentiator.
batch_symbol_values | 3-D tf.Tensor of DType tf.float32 containing values to fill in to every parameter in every circuit. The first two dimensions are the same shape as batch_programs; the last dimension is the length of new_symbol_names. Thus, at each index i in the first dimension is the 2-D tensor of parameter values to fill in to batch_programs[i].
batch_weights | 3-D tf.Tensor of DType tf.float32 which defines how much weight to give to each program when computing the derivatives. The first dimension is the length of the input programs, the second dimension is the length of the input symbol_names, and the third dimension is determined by the inheriting differentiator.
batch_mapper | 3-D tf.Tensor of DType tf.int32 which defines how to map expectation values of the circuits generated by this differentiator to the derivatives of the original circuits. It says which indices of the returned programs are relevant for the derivative of each symbol, for use by tf.gather. The first dimension is the length of the input programs, the second dimension is the length of the input symbol_names, and the third dimension is the length of the last dimension of the output batch_weights.
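To make this contract concrete, here is a hypothetical first-order forward-difference differentiator written against this interface. It is a sketch for illustration only: the class name MyForwardDifference and its epsilon parameter are invented here, and the library's own tfq.differentiators.ForwardDifference is the production implementation.

```python
import tensorflow as tf
import tensorflow_quantum as tfq

class MyForwardDifference(tfq.differentiators.Differentiator):
    """Hypothetical d/dx f ~ (f(x + eps) - f(x)) / eps differentiator."""

    def __init__(self, epsilon=1e-4):
        self.epsilon = epsilon

    @tf.function
    def get_gradient_circuits(self, programs, symbol_names, symbol_values):
        n_programs = tf.shape(programs)[0]
        n_symbols = tf.shape(symbol_names)[0]
        n_shifts = n_symbols + 1  # one unshifted copy plus one per symbol

        # Every entry of batch_programs[i] is just programs[i]; only the
        # parameter values differ between copies.
        batch_programs = tf.tile(tf.expand_dims(programs, 1), [1, n_shifts])
        new_symbol_names = tf.identity(symbol_names)

        # Row 0 is the zero shift; row j + 1 shifts symbol j by epsilon.
        shifts = tf.concat(
            [tf.zeros([1, n_symbols]), self.epsilon * tf.eye(n_symbols)], 0)
        batch_symbol_values = (
            tf.expand_dims(symbol_values, 1) + tf.expand_dims(shifts, 0))

        # Weights (-1/eps, +1/eps) pair with expectations (0, j + 1), so the
        # derivative for symbol j is (f(x + eps e_j) - f(x)) / eps.
        single_weights = tf.stack(
            [tf.fill([n_symbols], -1.0 / self.epsilon),
             tf.fill([n_symbols], 1.0 / self.epsilon)], 1)
        batch_weights = tf.tile(
            tf.expand_dims(single_weights, 0), [n_programs, 1, 1])
        single_mapper = tf.stack(
            [tf.zeros([n_symbols], dtype=tf.int32),
             tf.range(1, n_symbols + 1)], 1)
        batch_mapper = tf.tile(
            tf.expand_dims(single_mapper, 0), [n_programs, 1, 1])

        return (batch_programs, new_symbol_names, batch_symbol_values,
                batch_weights, batch_mapper)
```

An instance of this class can then be attached to an op with generate_differentiable_op exactly as shown above.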
refresh
```python
refresh()
```
Refresh this differentiator in order to use it with other ops.
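Because a differentiator can be attached to only one op at a time, refresh() detaches it so it can be reused. A short sketch:

```python
import tensorflow_quantum as tfq

diff = tfq.differentiators.ParameterShift()
op_1 = diff.generate_differentiable_op(analytic_op=tfq.get_expectation_op())

# Detach from op_1, then attach to a different (here sample-based) op.
diff.refresh()
op_2 = diff.generate_differentiable_op(
    sampled_op=tfq.get_sampled_expectation_op())
```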