Graphs in action: from real-world systems to explainable AI and beyond / Giorgi, Flavio. - (2026 May 11).
Graphs in action: from real-world systems to explainable AI and beyond
GIORGI, FLAVIO
11/05/2026
Abstract
Graphs provide a unifying language to reason about structure, constraints, and flow in both engineered systems and machine learning pipelines. This thesis advances that perspective along three directions. First, it studies graph-based optimization for real-world, networked settings where communication, computation, and sensing are interdependent. By coupling connectivity, routing, and data reduction within a single planning framework, the work develops scalable algorithms that improve task completion, latency, and resilience under realistic resource limits. Second, it investigates explainability for graph learning through counterfactual reasoning. The thesis formulates a unified view in which changes to node features, edge attributes, and topology are optimized to produce compact, faithful, and plausible “what-if” explanations. It further explores how these counterfactuals can be rendered as clear, human-readable narratives, evaluated with both quantitative criteria and user studies. Third, it moves beyond the graph domain by introducing training-time mechanisms that use counterfactual signals to shape decision boundaries. This regularization improves generalization while amortizing the cost of explanation, enabling rapid retrieval of informative counterfactuals at inference. Finally, the thesis proposes a lightweight pipeline to build counterfactual narratives in the domain of tabular data; this two-stage process balances quality and efficiency, making explanation generation practical for latency- and energy-constrained deployments. Together, these contributions show that graphs provide a powerful tool to model complex, real-world systems under realistic resource and reliability constraints. Moreover, our work demonstrates that, despite the complexity of their structure, graphs and related learning models can be effectively explained. Finally, we show that explanation techniques used in graphs can be extended to regularize and explain other machine learning models.
| File | Notes | Type | License | Size | Format |
|---|---|---|---|---|---|
| Tesi_dottorato_Giorgi.pdf (open access) | complete thesis | Doctoral thesis | Creative Commons | 17.86 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.