Dal principio del consenso informato al trust-based consent, una possibile traiettoria evolutiva / De Vivo, Isabella. - In: MEDIA LAWS. - ISSN 2532-9146. - 1/2025(2025), pp. 127-161.
Dal principio del consenso informato al trust-based consent, una possibile traiettoria evolutiva
Isabella de Vivo
First
Writing – Original Draft Preparation
2025
Abstract
With the aim of tracing an evolutionary pathway from informed consent to a broader notion of “Trust-Based Consent”, this paper examines the limits of the current “expertocratic” framing of (Trust)worthiness in the AI Act. It asks whether, and to what extent, such a framework can genuinely guarantee, rather than merely “communicate”, the normative foundations of a human-centric approach to AI. At its core lies the principle of human decision-making sovereignty, understood also as the autonomy of Human Recipients (HRs) to take part in decisions on the concrete admissibility of risks to fundamental rights posed by the AI systems with which they are expected, willingly or not, to interact, as well as by the associated computational configurations. Through a parallel analysis of the ex post safeguards under the GDPR and the critical issues of the GPAI Code of Practice drafting process, the paper argues that de-naturalizing the view of AI as a product is crucial to enabling an alternative governance model, grounded in the democratic participation of HRs. Such a model should reclaim trust as a bidirectional and iterative process, linking the design and implementation of AI systems to the specific norms, values, and standards of host communities across the entire AI value chain.