
An efficient policy iteration algorithm for dynamic programming equations

A. Alla; M. Falcone; D. Kalise
2015

Abstract

We present an accelerated algorithm for the solution of static Hamilton-Jacobi-Bellman equations related to optimal control problems. Our scheme is based on a classical policy iteration procedure, which is known to converge super-linearly in many relevant cases, provided the initial guess is sufficiently close to the solution. When this condition is not met, the procedure often degenerates into a behavior similar to that of a value iteration method, with increased computation time. The new scheme circumvents this problem by combining the advantages of both algorithms through an efficient coupling: the method starts with a coarse-mesh value iteration phase and then switches to a fine-mesh policy iteration procedure once a certain error threshold is reached. A delicate point is the choice of this threshold, which must avoid cumbersome value iteration computations while still ensuring convergence of the policy iteration method to the optimal solution. We analyze the methods and their coupling on a number of examples in different dimensions, illustrating their properties.
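The value/policy iteration coupling described in the abstract can be illustrated, in a heavily simplified setting, by the sketch below. It is not the authors' implementation: it uses a generic finite-state, discounted dynamic programming equation instead of a discretized HJB equation, it omits the coarse-to-fine mesh refinement, and the names and parameters (P, c, gamma, switch_tol) are assumptions introduced for this toy example. It only shows the structural idea of running cheap value iterations until the residual drops below a threshold and then switching to policy iteration.

```python
# Minimal sketch (assumed toy setting) of a value-iteration / policy-iteration
# coupling for a discounted, finite-state dynamic programming equation
#     V(x) = min_a [ c(x,a) + gamma * sum_y P(y | x,a) V(y) ].
# The coarse-to-fine mesh refinement of the paper is NOT reproduced here.

import numpy as np

def value_iteration_phase(P, c, gamma, V, switch_tol, max_iter=10_000):
    """Fixed-point (value) iterations until the sup-norm residual
    falls below switch_tol."""
    for _ in range(max_iter):
        # Q[x, a] = c(x, a) + gamma * E[V(next state)]
        Q = c + gamma * np.einsum('axy,y->xa', P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < switch_tol:
            return V_new
        V = V_new
    return V

def policy_iteration_phase(P, c, gamma, V, max_iter=100):
    """Policy iteration (Howard's algorithm), started from the value
    function produced by the value-iteration phase."""
    n_states = c.shape[0]
    I = np.eye(n_states)
    policy = (c + gamma * np.einsum('axy,y->xa', P, V)).argmin(axis=1)
    for _ in range(max_iter):
        # Policy evaluation: solve the linear system (I - gamma * P_pi) V = c_pi.
        P_pi = P[policy, np.arange(n_states), :]
        c_pi = c[np.arange(n_states), policy]
        V = np.linalg.solve(I - gamma * P_pi, c_pi)
        # Policy improvement.
        new_policy = (c + gamma * np.einsum('axy,y->xa', P, V)).argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return V, policy
        policy = new_policy
    return V, policy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 50, 4, 0.95
    # Random transition kernel P[a, x, y] and running cost c[x, a].
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)
    c = rng.random((n_states, n_actions))
    V0 = np.zeros(n_states)
    V_switch = value_iteration_phase(P, c, gamma, V0, switch_tol=1e-2)
    V_opt, pi_opt = policy_iteration_phase(P, c, gamma, V_switch)
    residual = np.max(np.abs(
        V_opt - (c + gamma * np.einsum('axy,y->xa', P, V_opt)).min(axis=1)))
    print("Bellman residual after coupling:", residual)
```

In this discrete setting the policy evaluation step reduces to a single linear solve, which is what makes policy iteration fast once the policy produced by the cheap value-iteration phase is already close to optimal.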
2015
optimal control; dynamic programming; numerical approximation
01 Journal publication::01a Journal article
An efficient policy iteration algorithm for dynamic programming equations / Alla, A.; Falcone, Maurizio; Kalise, D. - In: SIAM JOURNAL ON SCIENTIFIC COMPUTING. - ISSN 1064-8275. - PRINT. - 37:1(2015), pp. 181-200.
Files attached to this product

File: Alla_Efficient-policy_2015.pdf
Access: open access
Note: Main article
Type: Post-print document (version after peer review, accepted for publication)
License: All rights reserved
Size: 806.51 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/667022
Citations
  • PMC: ND
  • Scopus: 50
  • Web of Science (ISI): 48