Abstract:
Multi-step quasi-Newton methods for optimization use data from more than one previous step to construct the current Hessian approximation. Such methods were introduced by Ford and Moughrabi [3,4],
who showed how to construct them by means of interpolating curves. To obtain a better
parametrization of the interpolation, Ford [2] developed the idea of “implicit” methods. In this paper, we
derive new implicit updates similar to the methods $\mathbf{14}$ and $\mathbf{15}$ developed in [7]. The numerical results presented here show that both new methods outperform the existing methods, particularly as the dimension of the test problem grows.
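For readers unfamiliar with the setting, the contrast between single-step and multi-step updates can be sketched as follows. The notation below is generic and illustrative, not taken from [3,4]: a classical quasi-Newton method imposes the secant condition on the single most recent step, whereas a two-step method imposes an analogous condition along a combined direction whose coefficient comes from the parametrization of an interpolating curve.

```latex
% Classical (single-step) secant condition on the new Hessian
% approximation $B_{k+1}$, with
%   $s_k = x_{k+1} - x_k$, \quad $y_k = g_{k+1} - g_k$:
\[
  B_{k+1}\, s_k = y_k .
\]
% A two-step method instead uses data from the two most recent steps,
% imposing (in schematic form)
\[
  B_{k+1}\, r_k = w_k ,
  \qquad
  r_k = s_k - \delta\, s_{k-1},
  \qquad
  w_k = y_k - \delta\, y_{k-1},
\]
% where the scalar $\delta$ is determined by the chosen parametrization
% of the interpolating curve through $x_{k-1}$, $x_k$, $x_{k+1}$;
% the "implicit" methods of [2] concern precisely this choice.
```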