Optimal control and the Dynamic Programming Principle / Falcone, Maurizio. - Print. - (2015), pp. 956-961. [doi:10.1007/978-1-4471-5058-9_209]
Optimal control and the Dynamic Programming Principle
FALCONE, Maurizio
2015
Abstract
This entry illustrates the application of Bellman’s dynamic programming principle within the context of optimal control problems for continuous-time dynamical systems. The approach leads to a characterization of the optimal value of the cost functional, over all possible trajectories given the initial conditions, in terms of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, this can be used to synthesize the corresponding optimal control input as a state-feedback law.
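The abstract condenses three objects that the entry presumably develops in detail: the value function, Bellman's dynamic programming principle, and the Hamilton–Jacobi–Bellman equation. As an illustrative sketch only, the block below writes them out for a standard infinite-horizon discounted problem; the dynamics f, running cost ℓ, discount rate λ, control set A, and the resulting formulas are generic textbook notation and are not taken from the entry itself, whose exact problem class may differ.

```latex
% Illustrative sketch: infinite-horizon discounted optimal control
% (an assumed setting; the entry may treat finite-horizon or other problems).

% Controlled dynamics and cost functional:
\[
  \dot y(t) = f\bigl(y(t), a(t)\bigr), \qquad y(0) = x, \qquad
  J_x(a) = \int_0^{\infty} e^{-\lambda t}\, \ell\bigl(y(t), a(t)\bigr)\, dt .
\]

% Value function: optimal cost over all admissible controls a(.) in A:
\[
  v(x) = \inf_{a(\cdot) \in \mathcal{A}} J_x(a).
\]

% Bellman's dynamic programming principle: for every horizon tau > 0,
\[
  v(x) = \inf_{a(\cdot) \in \mathcal{A}}
         \left\{ \int_0^{\tau} e^{-\lambda t}\, \ell\bigl(y(t), a(t)\bigr)\, dt
                 + e^{-\lambda \tau}\, v\bigl(y(\tau)\bigr) \right\}.
\]

% Letting tau -> 0 yields the Hamilton--Jacobi--Bellman equation:
\[
  \lambda\, v(x) + \sup_{a \in A}
    \bigl\{ -f(x, a) \cdot \nabla v(x) - \ell(x, a) \bigr\} = 0 ,
\]

% and, where the supremum is attained, an optimal state-feedback law is
\[
  a^{*}(x) \in \arg\max_{a \in A}
    \bigl\{ -f(x, a) \cdot \nabla v(x) - \ell(x, a) \bigr\}.
\]
```

In this sketch the value function is characterized as the solution of the HJB equation (in the viscosity sense, under suitable assumptions), and evaluating the pointwise maximizer gives the state-feedback synthesis mentioned in the abstract.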
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| Falcone_Optimal-control_2015.pdf | Publisher's version (published with the publisher's layout) | All rights reserved | 123.83 kB | Adobe PDF | Archive administrators only (contact the author) |
| Falcone_frontespizio_Optimal-control_2015.pdf | Publisher's version (published with the publisher's layout) | All rights reserved | 186.43 kB | Adobe PDF | Archive administrators only (contact the author) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.