tfg.math.optimizer.levenberg_marquardt.minimize
Minimizes a set of residuals in the least-squares sense.
tfg.math.optimizer.levenberg_marquardt.minimize(
    residuals,
    variables,
    max_iterations,
    regularizer=1e-20,
    regularizer_multiplier=10.0,
    callback=None,
    name='levenberg_marquardt_minimize'
)
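In least-squares terms, if r_1, ..., r_k denote the residual callables passed via residuals and x denotes the variables, the procedure searches for a local minimizer of the sum of squared residual norms (the exact constant scaling of the objective is an implementation detail). A sketch of the objective and of the standard Levenberg-Marquardt step, where J is the Jacobian of the stacked residuals and lambda plays the role of the regularizer argument:

    \min_{x} \sum_{i=1}^{k} \lVert r_i(x) \rVert_2^2,
    \qquad
    (J^\top J + \lambda I)\, \Delta x = -J^\top r(x)

The implementation may differ in details such as scaling or the exact damping matrix.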
Args

residuals
    A residual or a list/tuple of residuals. A residual is a Python callable. See the usage sketch after this table.

variables
    A variable or a list/tuple of variables defining the starting point of the minimization.

max_iterations
    The maximum number of iterations.

regularizer
    The regularizer is used to damp the step size when the iterations become unstable. The larger the regularizer, the smaller the step size.

regularizer_multiplier
    If an iteration does not decrease the objective, a new regularizer is computed by scaling the current one by this multiplier.

callback
    A Python callable invoked at each iteration with the signature f(iteration, objective_value, variables). In graph mode, the callback should return an op or a list of ops that execute the callback logic. The callback can be used for logging, for example to print the objective value at each iteration.

name
    A name for this op. Defaults to "levenberg_marquardt_minimize".
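As a complement to the graph-mode code in the Examples section below, the following is a minimal sketch of a call under eager execution. It assumes tensorflow_graphics is installed and imported as tfg; the residual definition is purely illustrative, and the two-element unpacking of the result follows the Returns section.

import numpy as np
import tensorflow as tf
import tensorflow_graphics as tfg

# Illustrative starting point; the values and shape are arbitrary.
x = tf.constant(np.random.random_sample(size=(1, 2)), dtype=tf.float32)

def residual(x):
  # A residual is a Python callable mapping the variables to a tensor whose
  # squared norm contributes to the objective.
  return x - 1.0

# A single residual and a single variable are accepted directly; lists or
# tuples of residuals and variables work the same way.
objective, final_variables = tfg.math.optimizer.levenberg_marquardt.minimize(
    residuals=residual, variables=x, max_iterations=5)

Under eager execution the returned values are concrete tensors, so no session is needed.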
Returns

The value of the objective function and the variables attained at the final iteration of the minimization procedure.
Raises

ValueError
    If max_iterations is not at least 1.

InvalidArgumentError
    Raised only in graph mode, when the Cholesky decomposition fails. One likely fix is to increase the regularizer; see the retry sketch after this table. In eager mode this exception is caught and the regularizer is increased automatically.
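The following is a hedged sketch of the graph-mode retry suggested above: the op is rebuilt with a progressively larger regularizer whenever running it fails with an InvalidArgumentError from the Cholesky decomposition. The residual, shapes, retry count, and the factor of 10 (mirroring the default regularizer_multiplier) are illustrative assumptions, not part of the API.

import numpy as np
import tensorflow as tf
import tensorflow_graphics as tfg

tf.compat.v1.disable_eager_execution()  # Graph mode, where the error can surface.

x = tf.constant(np.random.random_sample(size=(1, 2)), dtype=tf.float32)

def residual(x):
  return x - 1.0  # Illustrative residual.

regularizer = 1e-20
for attempt in range(3):
  # Rebuild the op with the current regularizer.
  minimize_op = tfg.math.optimizer.levenberg_marquardt.minimize(
      residuals=residual, variables=x, max_iterations=10,
      regularizer=regularizer)
  try:
    with tf.compat.v1.Session() as sess:
      objective, final_variables = sess.run(minimize_op)
    break
  except tf.errors.InvalidArgumentError:
    if attempt == 2:
      raise  # Give up after the last attempt.
    # The Cholesky decomposition failed; increase the damping and retry,
    # mirroring what the eager-mode path does automatically.
    regularizer *= 10.0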
Examples

import numpy as np
import tensorflow as tf
import tensorflow_graphics as tfg

x = tf.constant(np.random.random_sample(size=(1, 2)), dtype=tf.float32)
y = tf.constant(np.random.random_sample(size=(3, 1)), dtype=tf.float32)

def f1(x, y):
  return x + y

def f2(x, y):
  return x * y

def callback(iteration, objective_value, variables):
  def print_output(iteration, objective_value, *variables):
    print("Iteration:", iteration, "Objective Value:", objective_value)
    for variable in variables:
      print(variable)
  inp = [iteration, objective_value] + list(variables)
  return tf.py_function(print_output, inp, [])

minimize_op = tfg.math.optimizer.levenberg_marquardt.minimize(
    residuals=(f1, f2),
    variables=(x, y),
    max_iterations=10,
    callback=callback)

if not tf.executing_eagerly():
  with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(minimize_op)