Sparse Vicious Attacks on Graph Neural Networks / Trappolini, Giovanni; Maiorca, Valentino; Severino, Silvio; Rodola, Emanuele; Silvestri, Fabrizio; Tolomei, Gabriele. - In: IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE. - ISSN 2691-4581. - 5:5(2024), pp. 2293-2303. [10.1109/TAI.2023.3319306]
Sparse Vicious Attacks on Graph Neural Networks
Trappolini, Giovanni; Maiorca, Valentino; Severino, Silvio; Rodola, Emanuele; Silvestri, Fabrizio; Tolomei, Gabriele
2024
Abstract
In this study, we introduce SAVAGE, a novel framework for sparse vicious adversarial link prediction attacks in Graph Neural Networks (GNNs). While GNNs have been successful in link prediction tasks, they are susceptible to adversarial attacks where malicious nodes attempt to manipulate recommendations for a target victim. SAVAGE optimizes the attacker's goal to maximize attack effectiveness while minimizing the required malicious resources. Unlike existing methods with static resource-based upper bounds, SAVAGE employs a sparsity-enforcing mechanism to reduce the number of malicious nodes needed for the attack. Extensive experiments on real-world and synthetic datasets demonstrate the optimal trade-off achieved by SAVAGE between a high attack success rate and the number of malicious nodes utilized. Furthermore, we demonstrate that SAVAGE can successfully target non-GNN-based link prediction systems, even those unknown at the time of the attack. This showcases the transferability of SAVAGE-generated attacks to other black-box methods for link prediction, highlighting its applicability across different real-world scenarios.
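To make the sparsity-enforcing idea concrete, the sketch below shows one plausible way such an attack objective could be set up: an effectiveness term that raises a surrogate model's score for the target link, plus an L1 penalty on per-node "gate" variables that discourages using many malicious nodes. All names (`attack_loss`, `gates`, `sparsity_weight`) and the placeholder surrogate score are illustrative assumptions, not the authors' actual implementation.

```python
import torch

# Minimal illustrative sketch (NOT the authors' implementation): an attack
# objective that trades off effectiveness against the number of malicious
# nodes used. `gates` are hypothetical per-malicious-node activations; an
# L1 penalty pushes most of them toward zero, mimicking a sparsity-enforcing
# mechanism.

def attack_loss(target_score: torch.Tensor,
                gates: torch.Tensor,
                sparsity_weight: float = 0.1) -> torch.Tensor:
    # target_score: link-prediction score for the attacker's target link,
    #               as produced by some differentiable surrogate model.
    # gates: values in [0, 1], one per candidate malicious node.
    effectiveness = -target_score          # minimizing this maximizes the score
    sparsity = gates.abs().sum()           # L1 surrogate for the node count
    return effectiveness + sparsity_weight * sparsity

# Toy usage: optimize the gates with gradient descent. A real attack would
# also optimize the malicious nodes' features and edges through a GNN.
raw_gates = torch.randn(16, requires_grad=True)   # 16 candidate vicious nodes
optimizer = torch.optim.Adam([raw_gates], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    gates = torch.sigmoid(raw_gates)
    target_score = gates.mean()                   # placeholder for a surrogate GNN score
    loss = attack_loss(target_score, gates, sparsity_weight=0.1)
    loss.backward()
    optimizer.step()
```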
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Trappolini_preprint_Sparse_2022.pdf (Note: DOI: 10.1109/TAI.2023.3319306) | Open access | Pre-print (manuscript submitted to the publisher, prior to peer review) | All rights reserved | 412.99 kB | Adobe PDF |
| Trappolini_Sparse_2024.pdf | Archive managers only (contact the author) | Publisher's version (published version with the publisher's layout) | All rights reserved | 1.41 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.