
Genµ: The Generative Machine Unlearning Challenge / Thakral, Kartik; Pathak, Shreyansh; Glaser, Tamar; Hassner, Tal; Garcia-Olano, Diego; Masi, Iacopo; Singh, Richa; Vatsa, Mayank. - (2025). (Presented at the International Conference on Computer Vision (ICCV) Workshops, held in Hawaii).

Genµ: The Generative Machine Unlearning Challenge

Iacopo Masi
2025

Abstract

Generative machine unlearning has emerged as a critical requirement for the responsible deployment of text-to-image generative models, where the ability to erase specific visual concepts is essential for addressing concerns of privacy, copyright, and ethical use. Despite rapid progress in generative modeling, the field lacks standardized benchmarks to evaluate how effectively models can forget targeted concepts while retaining adjacent and unrelated knowledge. To fill this gap, we introduce the Gen$\mu$ benchmark, which provides an extensive dataset of target, retain, and adjacent concepts, coupled with carefully engineered and adversarial prompts designed to probe unlearning robustness. To ensure fair and comprehensive assessment, we utilize the Erasing-Retention-Robustness score, a unified metric for capturing erasing accuracy, retention accuracy, adjacent-concept preservation, engineered-prompt robustness, and adversarial robustness. Alongside this benchmark, we establish detailed baselines using widely adopted unlearning algorithms, demonstrating the strengths and limitations of current approaches. By consolidating tasks such as single-concept, multi-concept, and continuous unlearning in a unified framework, the Gen$\mu$ benchmark provides the first rigorous foundation for systematic evaluation in this domain. It aims to catalyze future research on controllable and responsible generative models that can selectively forget while preserving generality and robustness.
International Conference on Computer Vision (ICCV) Workshops
unlearning; generative AI
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this record
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1751321
Warning: the data displayed here have not been validated by the university.
