Rates of convergence for the policy iteration method for Mean Field Games systems / Camilli, F.; Tang, Q. - In: JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS. - ISSN 0022-247X. - 512:1 (2022), p. 126138. [10.1016/j.jmaa.2022.126138]
Rates of convergence for the policy iteration method for Mean Field Games systems
Camilli, F.; Tang, Q.
2022
Abstract
Convergence of the policy iteration method for discrete and continuous optimal control problems holds under general assumptions. Moreover, in some circumstances, it is also possible to show a quadratic rate of convergence for the algorithm. For Mean Field Games, convergence of the policy iteration method has been recently proved in [9]. Here, we provide an estimate of its rate of convergence.
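The abstract refers to the classical policy iteration method from discrete optimal control, which the paper extends to Mean Field Games. As background, the following is a minimal, self-contained sketch of policy iteration for a finite Markov decision process (not the MFG system studied in the paper); the toy transition model `P` and all variable names are illustrative assumptions, not taken from the article.

```python
def policy_iteration(P, gamma=0.9, tol=1e-10):
    """Classical policy iteration for a finite MDP.

    P[s][a] is a list of (probability, next_state, reward) triples.
    Alternates exact-ish policy evaluation with greedy improvement
    until the policy is stable.
    """
    n = len(P)
    policy = [0] * n      # start from an arbitrary policy
    V = [0.0] * n
    while True:
        # Policy evaluation: iterate the Bellman operator for the fixed policy.
        while True:
            delta = 0.0
            for s in range(n):
                v = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: act greedily with respect to the current V.
        stable = True
        for s in range(n):
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in range(len(P[s]))]
            best = max(range(len(q)), key=q.__getitem__)
            if q[best] > q[policy[s]] + 1e-12:
                policy[s] = best
                stable = False
        if stable:
            return policy, V

# Hypothetical two-state example: in state 0, action 1 moves to state 1
# (reward 1); in state 1, action 0 stays put (reward 2).
P = [
    [[(1.0, 0, 0.0)], [(1.0, 1, 1.0)]],
    [[(1.0, 1, 2.0)], [(1.0, 0, 0.0)]],
]
policy, V = policy_iteration(P)
```

In this toy example the method stabilizes after a couple of iterations, consistent with the fast (in favorable cases, quadratic) convergence behavior the abstract alludes to for the control setting.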
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Camilli_Rates_2022.pdf | open access | Editorial version (published version with the publisher's layout) | All rights reserved | 386.27 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.