
The paradigm of controllability and beyond

Francesco Simone; Riccardo Patriarca
2025

Abstract

In safety-critical domains, the pursuit of “safe operations” has long been framed around the prevention of adverse events, typically through risk management. However, such a narrow focus fails to account for the true goals of complex systems: to maintain coherent and sustained operations that continuously balance safety and productivity under conditions of uncertainty and performance variability, such as unexpected events. Performing resiliently is a key ability that socio-technical systems should possess. The emergence of unexpected events, driven by the inherent complexity of socio-technical systems, has increasingly challenged traditional safety science. This has led to a gradual shift away from a risk-focused perspective toward a resilience-oriented one. While the risk-based approach emphasised the need for knowledge in order to exercise control, the resilience perspective recognises the limits of what can be known and instead favours the system's ability to adapt in the face of surprises. Accordingly, demonstrating adaptation requires not being prepared for what might happen: lying in a state of “ignorance” in which the unknown is nonetheless correctly managed. At the extreme, the system potentially most capable of adapting is the one with the least knowledge. But does that make it the most resilient, too? The answer is likely no. Trade-offs are around the corner (Hollnagel, 2009). A certain level of control, and the knowledge that comes with it, is indeed essential for managing the unknown resiliently. One might therefore assume that the more a system is controlled, and the more it knows, the better it is prepared for the unknown, thereby favouring control over adaptation to ensure resilient performance. There are hints that this is not true at all.

The RESIST Project
During the RESIST project (RESilience Management for Industrial Systems Threats; cf. the acknowledgment section), the research team (here represented by F.S. and R.P.) carried out an experimental campaign involving Human-Hardware-in-the-Loop (HHIL) simulations (Nakhal Akel et al., 2024). The HHIL approach combines real-world data from human operations and hardware functioning with a digital model of the complex system under analysis to ensure high-fidelity simulation results. The simulations involved human operators detecting failures and then restoring operations on a mock-up plant mimicking an oil and gas extraction process. Different operator personas were tested, ranging from those working under tightly controlled conditions to those given greater freedom to adapt, depending on the practicability of the work conditions. A simple resilience measure computed over the simulation results shows that personas performed worse (on average) and more variably at both extremes (Figure 1). While these early results are not yet statistically significant, the empirical observations suggest that controllability and adaptability cannot be pursued in isolation. Instead, the two “forces” must be balanced to place the system in a state with a greater potential for performing resiliently. If complex adaptive socio-technical systems are to function safely while purposefully sustaining adaptive capacity, then we must ask: what kind of system design and organisational approach makes that possible in a complex, variable world? What are the implications for controllability and adaptability? A truly stable system must survive in a world full of surprises.
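The abstract does not define the resilience measure used in the RESIST campaign, so the following is a purely hypothetical sketch of how such a measure might be computed over recovery runs. The persona model, the time constants, and the scoring function are all assumptions made for illustration, and the inverted-U shape is built into the toy model to mirror the pattern reported above; nothing here is derived from RESIST data.

```python
import random
import statistics

# Hypothetical sketch, not the RESIST implementation. A "persona" is reduced
# to a control level c in [0, 1]: 0 = full freedom to adapt, 1 = tightly
# controlled. The toy recovery model assumes the inverted-U pattern reported
# in the abstract, so it illustrates the measure, not the evidence.

def simulate_recovery_time(c, rng):
    """Toy model: recovery is slowest and most variable at both extremes."""
    imbalance = abs(c - 0.5) * 2           # 0 when balanced, 1 at an extreme
    mean_time = 10.0 + 15.0 * imbalance    # assumed mean recovery (minutes)
    spread = 2.0 + 6.0 * imbalance         # assumed variability
    return max(1.0, rng.gauss(mean_time, spread))

def resilience_score(recovery_time, window=40.0):
    """Simple measure: fraction of the allowed recovery window left unused."""
    return max(0.0, 1.0 - recovery_time / window)

rng = random.Random(42)
for c in (0.0, 0.25, 0.5, 0.75, 1.0):
    scores = [resilience_score(simulate_recovery_time(c, rng))
              for _ in range(200)]
    print(f"control={c:.2f}  mean={statistics.mean(scores):.2f}  "
          f"sd={statistics.stdev(scores):.2f}")
```

Run over many simulated episodes, the balanced persona shows the highest mean score and the lowest spread, which is the shape of the argument; in the actual campaign the scores would of course come from observed operator and hardware data.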
Ashby’s Law of Requisite Variety states that to remain viable, a system must be able to respond to every type of perturbation it might face. In this sense, what is referred to is adaptability, or adaptive capacity. Since the number of possible disruptions is practically infinite, no system can rely on predefined responses alone. Not all contingencies can be engineered in advance. Nor, significantly, is adaptive capacity unlimited: the cost of coordination, for example, becomes increasingly limiting. We contend that two kinds of capacity are required: (i) designed control, and (ii) emergent creativity. In complexity terms, many desired states are “multiply realisable”, reachable through many pathways. But not all paths work every time. The ability to flex, adapt, and explore is key.

The free energy principle: A model for system coherence
Last year, we proposed the Free Energy Principle (FEP) as a framework for understanding adaptive system behaviour (Badcock et al., 2022). The FEP, originating in theoretical neuroscience (Friston, 2010), posits that systems maintain coherence by minimising surprise (or ‘free energy’) through learning and prediction (Miller et al., 2022). This implies several key design principles:
• Distributed sense-making: systems persist by dispersing learning and processing across multiple layers;
• Predictive processing: systems function best when they can anticipate, not merely react to, their environment.
From this standpoint, centralised command-and-control is a poor fit for dynamic, real-world operations. The cognitive and informational load is too high, the pace of change too fast. Instead, distributed control and adaptive autonomy are essential.
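As a minimal sketch of “minimising surprise through prediction”, the toy loop below tracks a drifting signal by revising a prediction in proportion to the prediction error. This is an assumption-laden caricature: the FEP proper works with variational free energy over a generative model, whereas here squared prediction error stands in for surprise and the learning rate is an arbitrary choice.

```python
import random

# Toy predictive-processing loop, not the FEP formalism: an agent holds a
# prediction of a drifting environmental signal and updates it using the
# prediction error. Squared error is a crude stand-in for surprise.

rng = random.Random(7)
signal = 0.0          # hidden environmental state
prediction = 0.0      # the agent's current belief
learning_rate = 0.3   # assumed: how strongly errors revise beliefs

for step in range(10):
    signal += rng.gauss(0.5, 0.2)               # the environment drifts
    observation = signal + rng.gauss(0.0, 0.1)  # noisy sensing
    error = observation - prediction            # prediction error
    surprise = error ** 2                       # proxy for surprise
    prediction += learning_rate * error         # update to reduce future error
    print(f"step={step}  obs={observation:6.2f}  "
          f"pred={prediction:6.2f}  surprise={surprise:6.3f}")
```

The point about distribution follows from the same logic: when the environment changes faster than a single central predictor can update, prediction, and hence surprise minimisation, has to be spread across many local loops.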
Designing for adaptive capacity
If we apply this insight to organisational design, the implications are evident:
• We must maintain both structures that resist upset (robustness) and structures that enable adaptation (resilience);
• Learning must be both explicit (codified, documented) and implicit (tacit, emergent);
• System design must intentionally maximise the flow of information and knowledge across formal and informal networks.
Humans are naturally good at this. We are self-networking entities, capable of sharing knowledge in unexpected, generative ways, if we are given the conditions and psychological safety to do so.

Rethinking Safety-II in light of complexity
Safety-I has focused heavily on accident prevention through root cause analysis and engineered control, which is valuable for known, repeatable risks. It helps with robustness. Safety-II, on the other hand, encourages us to learn from what goes ‘well’ or ‘successfully’ (whatever these actually mean), understanding variability in performance and successful adaptations. It points toward resilience. But Safety-II has often stalled in practice. One reason may be the lack of a compelling conceptual model to ensure it can be translated into practice.

A new framework: Constraints and patterning
In complex systems, stability is not imposed; it is enacted. Patterns of behaviour emerge from the constraints imposed on actors, processes, and technologies. These constraints may be:
• Fixed (e.g., gravity, regulation);
• Enabling (e.g., the rules of a game); or
• Emergent (e.g., social norms, cultural practices).
To support resilience, we must shift from trying to optimise systems toward designing constraint architectures that enable both robustness and adaptation. This means designing for flexibility, adaptability, experimentation, and generative learning.

Looking ahead: What we must do
If we want Safety-II to become more than a marginal idea, we must:
• Redesign our organisations to allow for distributed learning, emergent intelligence, and adaptive autonomy;
• Recognise that most human knowledge is socially developed and embedded in informal practices, not just formal procedures.
This is not a rejection of Safety-I, but an expansion. Robustness and resilience are not opposites; they are complementary. Together, they create the coherent, adaptive systems we need.
2025
7th International Workshop on Safety-II in Practice
04 Publication in conference proceedings::04d Abstract in conference proceedings
The paradigm of controllability and beyond / Smoker, Anthony; Simone, Francesco; Burnell, James; Patriarca, Riccardo. - (2025). (Paper presented at the 7th International Workshop on Safety-II in Practice, held in Delft).

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1748747