
AI methods for interactive systems: implicit behaviour detection and large language model–based design evaluation / Zeppieri, Stefano. - (2026 May 11).

AI methods for interactive systems: implicit behaviour detection and large language model–based design evaluation

ZEPPIERI, STEFANO
11/05/2026

Abstract

Understanding and improving how people interact with digital systems requires both reliable signals about user context and practical methods for evaluating interface quality. This thesis investigates how two complementary forms of intelligence, implicit sensing and Large Language Models (LLMs), can support these goals across real-time interaction and design-time analysis. Part I focuses on AI for interaction, studying implicit interaction mechanisms in which smartphones act as sensing and inference devices that reduce the need for explicit user input. The thesis introduces and evaluates mobile sensing techniques capable of recognizing car-related events and behaviors, including parking and unparking transitions and cruising-for-parking patterns, under constraints of deployability and low resource consumption. It further explores how short-range wireless cues, such as Bluetooth Low Energy, can complement motion and location traces to enable context-aware support in mobility settings. Together, these contributions show how implicit sensing can reduce user effort and enable proactive assistance, while also surfacing the interaction challenges introduced by uncertainty and misclassification. Part II examines LLMs for interaction engineering and design evaluation. Rather than treating LLMs as general-purpose chat interfaces, the thesis studies them as components within interactive pipelines that generate inspectable intermediate artifacts. Through a set of empirical and methodological studies, it investigates LLM-supported evaluation (including walkthrough-inspired critique generation and adaptations of heuristic evaluation for proactive and implicit systems), LLM adoption in early-stage need finding, and the automation of structured tasks such as transforming unstructured descriptions into form-ready input. 
The findings highlight a recurring duality: LLMs can accelerate work and produce useful, context-sensitive outputs, but they can also introduce plausible yet incorrect content and inconsistent reasoning. The thesis therefore treats fallibility as a design constraint, examining workflows and interaction patterns that make uncertainty visible, support verification, and enable effective correction and repair while preserving user control and trust. Overall, this thesis presents a unified view of AI-augmented interaction that spans low-level sensing of human behavior and high-level reasoning about interfaces and user tasks. It argues that the practical value of AI in interactive systems depends less on fully eliminating errors and more on designing mechanisms that keep humans informed, in control, and able to recover when automation fails.
Files attached to this item

File: Tesi_dottorato_Zeppieri.pdf
Access: open access
Note: Tesi_PhD___AI_Methods_for_Interactive_Systems__Implicit_Behaviour_Detection_and_Large_Language_Model_Based_Design_Evaluation
Type: Doctoral thesis
License: Creative Commons
Size: 18 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1767658