Policy iteration method for discounted infinite horizon mean field games: the semi-Lagrangian approach / Tang, Q.; Camilli, F.; Zhou, Y. - In: SCIENCE CHINA INFORMATION SCIENCES. - ISSN 1674-733X. - 68:11 (2025). [DOI: 10.1007/s11432-025-4646-9]
Policy iteration method for discounted infinite horizon mean field games: the semi-Lagrangian approach
Tang Q.; Camilli Fabio; Zhou Yongshen
2025
Abstract
We study the policy iteration method for solving discounted infinite-horizon mean field games. At the continuous level, a policy iteration algorithm can be used to establish the existence and uniqueness of solutions for mean field games with a sufficiently large discount factor λ. At the discrete level, it provides a way to compute a solution of the problem. To implement the method, we employ a semi-Lagrangian scheme, in which the Hamilton-Jacobi-Bellman equation is first discretized in time using the dynamic programming principle and then in space by projecting onto a grid. To support our theoretical findings, we present numerical examples in both one and two dimensions.
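
For a concrete picture of the scheme described in the abstract, the following is a minimal Python sketch of semi-Lagrangian policy iteration applied to a one-dimensional discounted Hamilton-Jacobi-Bellman equation in isolation; it is not the paper's implementation. All model data here are illustrative assumptions: a quadratic running cost with a hypothetical potential V, periodic grid on [0, 1], time step dt, discount factor lam, and a finite discretized control set.

import numpy as np

# Semi-Lagrangian policy iteration for a 1D discounted HJB equation (a
# sketch under assumed model data, not the paper's exact scheme).
# Dynamics: dx/dt = a; running cost: L(x, a) = 0.5*a^2 + V(x).

lam, dt = 1.0, 0.05             # discount factor and time step (assumed)
xs = np.linspace(0.0, 1.0, 101)   # spatial grid on [0, 1], periodic
acts = np.linspace(-1.0, 1.0, 21) # discretized control set

def V(x):
    # Hypothetical potential entering the running cost.
    return np.sin(2 * np.pi * x) ** 2

def interp(u, y):
    # P1 projection onto the grid: piecewise-linear, periodic interpolation.
    return np.interp(np.mod(y, 1.0), xs, u, period=1.0)

def evaluate(policy, iters=500):
    # Policy evaluation: fixed-point iteration for the value function u
    # under a frozen control, using the discrete dynamic programming
    # principle; the factor (1 - lam*dt) < 1 makes the map a contraction.
    u = np.zeros_like(xs)
    for _ in range(iters):
        y = xs + dt * policy                      # feet of characteristics
        u = dt * (0.5 * policy**2 + V(xs)) + (1 - lam * dt) * interp(u, y)
    return u

def improve(u):
    # Policy improvement: pointwise argmin over the discrete control set.
    vals = np.stack([dt * (0.5 * a**2 + V(xs))
                     + (1 - lam * dt) * interp(u, xs + dt * a)
                     for a in acts])
    return acts[np.argmin(vals, axis=0)]

policy = np.zeros_like(xs)
for k in range(20):                # outer policy iteration loop
    u = evaluate(policy)
    new_policy = improve(u)
    if np.allclose(new_policy, policy):
        break                      # policy stable: fixed point reached
    policy = new_policy

In the full mean field game, this loop would additionally be coupled with a discrete Fokker-Planck (continuity) equation updating the population density along the iterations; that coupling is omitted from this sketch.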


