In this thesis we propose new iteratively constructed preconditioners, to be paired with Conjugate Gradient-type algorithms, in order to efficiently solve large-scale unconstrained optimization problems. Along this guideline, the central thread of the thesis is the use of the conjugate directions generated by the Conjugate Gradient or SYMMBK algorithms. To this aim, in Chapter 1 we recall some results on iterative methods for solving linear systems. In particular, we introduce both the Conjugate Gradient method and the Lanczos process. Finally, the idea of preconditioning is presented and the well-known Preconditioned Conjugate Gradient method is described. In Chapter 2 we deal with large-scale unconstrained optimization problems, recalling the Nonlinear Conjugate Gradient (NCG) method and its preconditioned version, quasi-Newton methods, including damped techniques, and finally linesearch-based Inexact Newton methods.

The main contribution of this thesis is given in Chapters 3-7. In particular, Chapter 3 provides new preconditioners to be used within the NCG method. This class of preconditioners draws inspiration from quasi-Newton updates, in order to obtain a good approximation of the inverse of the Hessian matrix. In detail, at the current iteration of the NCG method we consider preconditioners based on new low-rank quasi-Newton symmetric updating formulae, obtained as a by-product of the NCG method at the previous steps. We propose five classes of preconditioners: in the first three classes the major drawback is that the preconditioner might not be positive definite. In the fourth class, a preconditioner that is positive definite and satisfies the secant equation at the current iteration is given. Finally, in the last class, a preconditioner that is positive definite and satisfies a modified secant equation at each iteration is provided.
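The Preconditioned Conjugate Gradient method recalled above can be sketched as follows. This is a minimal illustration, not the thesis's own implementation: it solves a symmetric positive definite system Ax = b, with the preconditioner supplied as a generic function `M_inv` applying an approximation of the inverse of A (the names `pcg` and `M_inv` are illustrative choices).

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned Conjugate Gradient for A x = b, with A symmetric
    positive definite and M_inv a function applying an approximate
    inverse of A (the preconditioner) to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()                  # first search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # exact steplength along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz        # keeps the directions A-conjugate
        p = z + beta * p
        rz = rz_new
    return x
```

With `M_inv` equal to the identity map this reduces to the plain Conjugate Gradient method; a simple Jacobi choice is `lambda v: v / np.diag(A)`.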
However, in general, the search direction pk computed at iteration k of the NCG method could be poorly scaled, which may introduce some ill-conditioning when applying the Preconditioned Nonlinear Conjugate Gradient (PNCG) method. In order to reduce this scaling problem, in Chapter 4 we focus on the use of damped techniques within NCG methods. Damped techniques were introduced by Powell and recently reproposed by Al-Baali in the framework of quasi-Newton methods. New damped techniques are proposed and embedded within a class of preconditioners described in Chapter 3 (see Section 3.3.4). In Chapter 5, referring to the Polak-Ribière (PR) method, we first give theoretical results on the global convergence of an effective PNCG algorithm. Second, we investigate the use of a damped vector in the definition of the scalar βk, hence affecting the definition of the search direction and producing a modified NCG/PNCG method. For this purpose, we prove that some global convergence properties still hold for the modified damped Polak-Ribière (D-PR-PNCG) method, while numerical performance is substantially preserved. In Chapters 6-7 we focus on linesearch-based Newton-Krylov methods for solving large-scale unconstrained optimization problems. In particular, in Chapter 6 we try to improve the efficiency of Newton-Krylov methods by proposing an adaptive truncation rule for deciding the maximum number of inner iterations allowed at each outer iteration. The adaptive truncation rule is tested within both the unpreconditioned and the preconditioned framework proposed in . Finally, Chapter 7 represents a work in progress: in detail, starting from , we deal with a class of preconditioners specifically suited for large indefinite linear systems, which may be obtained as a by-product of Krylov subspace solvers, as well as by applying L-BFGS updates. In Chapter 7, only some preliminaries and the structure of our class of preconditioners, namely AINVK, are provided.
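The role of the scalar βk in the Polak-Ribière scheme can be illustrated with a minimal unpreconditioned, undamped NCG sketch (the thesis's contributions add preconditioning and damping on top of this basic iteration, which are omitted here; the function name `ncg_pr` and the simple Armijo linesearch are illustrative assumptions).

```python
import numpy as np

def ncg_pr(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with the Polak-Ribiere scalar beta_k and a basic
    Armijo backtracking linesearch; a minimal sketch only."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g                                     # steepest-descent start
    for _ in range(max_iter):
        alpha, fx, gd = 1.0, f(x), g @ d
        for _ in range(30):                    # Armijo backtracking along d
            if f(x + alpha * d) <= fx + 1e-4 * alpha * gd:
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # Polak-Ribiere coefficient, clipped at zero (the PR+ variant),
        # which restarts with steepest descent when beta would be negative
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d                  # new search direction
        x, g = x_new, g_new
    return x
```

A poorly scaled direction d makes the initial trial step alpha = 1.0 a bad guess, forcing many backtracking steps; the damped techniques of Chapters 4-5 address exactly this kind of scaling issue.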
Finally, conclusions are drawn.
Advances in large scale unconstrained optimization: novel preconditioning strategies for Nonlinear Conjugate Gradient methods and new developments in Newton-Krylov methods / Caliciotti, Andrea. - (2018 Feb 19).
|Type:||07a Doctoral Thesis|