Usage of more transparent and explainable conflict resolution algorithm: Air traffic controller feedback / Hurter, C.; Degas, A.; Guibert, A.; Durand, N.; Ferreira, A.; Cavagnetto, N.; Islam, M. R.; Barua, S.; Ahmed, M. U.; Begum, S.; Bonelli, S.; Cartocci, G.; Flumeri, G. D.; Borghini, G.; Babiloni, F.; Aricó, P. - In: TRANSPORTATION RESEARCH PROCEDIA. - ISSN 2352-1465. - 66:C (2022), pp. 270-278. (Paper presented at the 34th Conference of the European Association for Aviation Psychology, held in Gibraltar, UK) [10.1016/j.trpro.2022.12.027].
Usage of more transparent and explainable conflict resolution algorithm: Air traffic controller feedback
Cartocci, G.; Borghini, G.; Babiloni, F.; Aricó, P.
2022
Abstract
Recently, Artificial Intelligence (AI) algorithms have received increasing interest in various application domains, including Air Traffic Management (ATM). Various AI, and in particular Machine Learning (ML), algorithms are used to provide decision support for autonomous decision-making tasks in the ATM domain, e.g., predicting air transportation traffic and optimizing traffic flows. However, these automated systems are often not accepted or trusted by their intended users, as the decisions provided by AI are frequently opaque, non-intuitive, and not understandable by human operators. Safety is the major pillar of air traffic management, and no black-box process can be inserted into a decision-making process when human life is involved. To address this challenge concerning the transparency of automated systems in the ATM domain, we investigated AI methods for predicting air traffic conflicts and optimizing traffic flows within the field of Explainable Artificial Intelligence (XAI). Here, the explainability of AI models, both in terms of understanding a decision (i.e., post hoc interpretability) and understanding how the model works (i.e., transparency), can be provided for air traffic controllers. In this paper, we report our research directions and our findings to support better decision making with AI algorithms with extended transparency.
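The abstract's distinction between post hoc interpretability (explaining a single decision) and transparency (understanding how the model works) can be made concrete with a small sketch. The snippet below is purely illustrative and is not the conflict-resolution model from the paper: the feature names (h_sep_nm, v_sep_ft, closure_kt), the synthetic labeling rule, and the choice of a random forest with permutation importance plus a shallow surrogate tree are all assumptions made for this example.

```python
# Hypothetical sketch: post hoc interpretability vs. transparency for a
# conflict-prediction model. Features and labels are synthetic; this does
# not reproduce the models or data used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["h_sep_nm", "v_sep_ft", "closure_kt"]

# Synthetic aircraft-pair features: horizontal separation (NM),
# vertical separation (ft), and closure rate (kt).
X = np.column_stack([
    rng.uniform(0, 20, 2000),      # horizontal separation
    rng.uniform(0, 4000, 2000),    # vertical separation
    rng.uniform(-100, 600, 2000),  # closure rate
])
# Invented labeling rule: a "conflict" when both separations are small
# and the aircraft pair is converging.
y = ((X[:, 0] < 5) & (X[:, 1] < 1000) & (X[:, 2] > 0)).astype(int)

# An opaque model standing in for the black box the abstract warns about.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post hoc interpretability: rank which inputs drove the predictions.
imp = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: {score:.3f}")

# Transparency: distill the black box into a shallow, human-readable
# surrogate tree that approximates how the model behaves overall.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=features))
```

In this sketch, permutation importance gives a post hoc ranking of which inputs drove the black box's predictions, while the depth-limited surrogate tree offers a transparent approximation of the model's overall decision logic — the two complementary notions of explainability the abstract names for air traffic controllers.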
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Hurter_Usage_2022.pdf | Open access | Publisher's version (published with the publisher's layout) | Creative Commons | 931.28 kB | Adobe PDF

Note: https://doi.org/10.1016/j.trpro.2022.12.027