
Problem description

I can't understand how to implement the backtracking line search algorithm in Python. The algorithm itself is: here

Another form of the algorithm is: here

In theory, they are exactly the same.
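
For reference, the standard backtracking (Armijo) line search for the gradient-descent direction Δx = −∇f(x), as given in Boyd & Vandenberghe's Convex Optimization (§9.2), is:

given α ∈ (0, 0.5), β ∈ (0, 1)
t := 1
while f(x − t∇f(x)) > f(x) − α·t·‖∇f(x)‖²:
    t := β·t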

I am trying to implement this in Python to solve an unconstrained optimization problem from a given starting point. Here is my attempt so far:

import numpy as np

def func(x):
    return  # my function with inputs x1, x2

def grad_func(x):
    df1 = ...  # derivative with respect to x1
    df2 = ...  # derivative with respect to x2
    return np.array([df1, df2])

def backtrack(x, gradient, t, a, b):
    '''
    x: the initial values given
    gradient: the initial gradient direction for the given initial value
    t: t is initialized at t=1
    a: alpha value between (0, .5). I set it to .3
    b: beta value between (0, 1). I set it to .8
    '''
    return t

# Define the initial point, step size, and alpha/beta constants
x0, t0, alpha, beta = [x1, x2], 1, .3, .8

# Find the gradient of the initial value to determine the initial slope
direction = grad_func(x0)

t = backtrack(x0, direction, t0, alpha, beta)

Can anyone offer guidance on how best to implement the backtracking algorithm? I feel like I have all the information I need, but I just don't understand how to implement it in code.

Tags: python, algorithm

Solution


import numpy as np

alpha = 0.3   # sufficient-decrease constant, in (0, 0.5)
beta = 0.8    # shrink factor for the step size, in (0, 1)

# Example objective f(x1, x2) = x1^2 + 3*x1*x2 + 12 and its partial derivatives
f = lambda x: x[0]**2 + 3*x[1]*x[0] + 12
dfx1 = lambda x: 2*x[0] + 3*x[1]
dfx2 = lambda x: 3*x[0]

t = 1
count = 1
x0 = np.array([2, 3])

def backtrack(x0, dfx1, dfx2, t, alpha, beta, count):
    # Gradient at x0; x0 is fixed during the search, so compute it once
    grad = np.array([dfx1(x0), dfx2(x0)])
    # Backtrack while the Armijo condition is violated, i.e. while
    # f(x0 - t*grad) > f(x0) - alpha * t * ||grad||^2
    while f(x0) - (f(x0 - t * grad) + alpha * t * np.dot(grad, grad)) < 0:
        t *= beta
        print("""

########################
###   iteration {}   ###
########################
""".format(count))
        print("Inequality: ", f(x0) - (f(x0 - t * grad) + alpha * t * np.dot(grad, grad)))
        count += 1
    return t

t = backtrack(x0, dfx1, dfx2, t, alpha, beta, count)

print("\nfinal step size :", t)

Output:

########################
###   iteration 1   ###
########################

Inequality:  -143.12


########################
###   iteration 2   ###
########################

Inequality:  -73.22880000000006


########################
###   iteration 3   ###
########################

Inequality:  -32.172032000000044


########################
###   iteration 4   ###
########################

Inequality:  -8.834580480000021


########################
###   iteration 5   ###
########################

Inequality:  3.7502844927999845

final step size : 0.32768000000000014
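
Note that backtrack only selects the step size for a single descent step: the returned t would then be used to update the point via x = x0 - t * grad, and the search repeated at the new point. Below is a minimal sketch of that outer loop using the same objective and gradient as above; the names backtrack_step, gradient_descent, tol, and max_iter are illustrative additions, not part of the original post.

import numpy as np

f = lambda x: x[0]**2 + 3*x[1]*x[0] + 12   # same objective as above

def grad(x):
    # Gradient of f: [df/dx1, df/dx2]
    return np.array([2*x[0] + 3*x[1], 3*x[0]])

def backtrack_step(x, alpha=0.3, beta=0.8):
    # Shrink t until the Armijo sufficient-decrease condition holds:
    # f(x - t*g) <= f(x) - alpha * t * ||g||^2
    g = grad(x)
    t = 1.0
    while f(x - t * g) > f(x) - alpha * t * np.dot(g, g):
        t *= beta
    return t

def gradient_descent(x, tol=1e-8, max_iter=1000):
    # Repeat: choose a step size by backtracking, then step along -grad
    for _ in range(max_iter):
        g = grad(x)
        if np.dot(g, g) < tol**2:   # stop once the gradient is (near) zero
            break
        x = x - backtrack_step(x) * g
    return x

x_min = gradient_descent(np.array([2.0, 3.0]))
print(x_min)

One caveat: this particular f is an indefinite quadratic (it has a saddle point, not a minimum), so on this objective the loop simply runs until max_iter; for a convex objective the same loop converges to the minimizer.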
