
Rethinking Business Continuity: A Critical Interpretability Perspective on Human-AI Crisis Integration / Lo Conte, Davide Liberato; Sancetta, Giuseppe; Antonini, Valerio. - (2025). (EISIC 28th, Belgrade, Serbia).

Rethinking Business Continuity: A Critical Interpretability Perspective on Human-AI Crisis Integration

Davide Liberato Lo Conte; Giuseppe Sancetta; Valerio Antonini
2025

Abstract

Purpose
As artificial intelligence (AI) systems become integral to crisis response and continuity planning, organizations face a fundamental challenge: how to preserve human judgment in machine-rich decision environments. This study investigates how interpretive capacity—the ability to question, contextualize, and ethically manage AI outputs—shapes business continuity under conditions of volatility, uncertainty, complexity, and ambiguity. It introduces critical interpretability as a cognitive–organizational capability that determines whether AI enhances or undermines resilience.

Methodology
Using a multiple case study design, the research examines ten Italian firms across high-impact sectors (energy, health, logistics, and finance) that deployed AI tools during crises between 2020 and 2024. Data were collected through interviews with executives, data scientists, and crisis managers, supplemented by internal documentation and archival materials. A grounded theory approach enabled the inductive development of a multi-level framework connecting human cognition, organizational design, and AI system features.

Findings
The results show that effective continuity arises not from technological sophistication alone, but from the depth of human engagement with AI. When decision-makers actively interpret, challenge, and ethically calibrate algorithmic recommendations, continuity responses become adaptive and contextually sound. The study identifies three enabling layers—AI system capabilities, human interpretive practices, and organizational conditions—whose alignment supports ethical and resilient action. Misalignment, by contrast, leads to brittle automation and ethical blind spots.

Research limitations and implications
While limited to a specific national and temporal context, the study provides a strong conceptual foundation for future cross-national and longitudinal research on interpretability and resilience. It suggests new directions for measuring interpretive maturity and for integrating cognitive and ethical dimensions into AI governance frameworks.

Originality and value
The paper challenges techno-deterministic perspectives by repositioning AI not as a replacement for human cognition, but as a catalyst for interpretive and ethical reasoning. It presents the first empirically grounded framework of critical interpretability in the context of organizational continuity, offering novel insights for scholars and practitioners seeking to balance automation, accountability, and adaptive sensemaking in high-stakes environments.
Artificial Intelligence; Crisis Management; Business Continuity; Human–AI Collaboration; Critical Interpretability.
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11573/1764907
Warning: the data shown have not been validated by the university.
