
Adversarial Attacks against Binary Similarity Systems

Capozzi, Gianluca; D'Elia, Daniele Cono; Di Luna, Giuseppe Antonio; Querzoni, Leonardo

2024

Abstract

Binary analysis has become essential for software inspection and security assessment. As the number of software-driven devices grows, research is shifting towards autonomous solutions based on deep learning models. In this context, a central problem is binary similarity, which involves determining whether two assembly functions originate from the same source code. However, it is unclear how deep learning models for binary similarity behave in an adversarial context. In this paper, we study the resilience of binary similarity models against adversarial examples, showing that they are susceptible to both targeted and untargeted (w.r.t. similarity goals) attacks performed by black-box and white-box attackers. We extensively test three state-of-the-art binary similarity solutions against (i) a black-box greedy attack that we enrich with a new search heuristic, terming it Spatial Greedy, and (ii) a white-box attack in which we repurpose a gradient-guided strategy used in attacks on image classifiers. Interestingly, the target models are more susceptible to black-box attacks than white-box ones, exhibiting greater resilience in the case of targeted attacks.
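For readers unfamiliar with the attack setting described in the abstract, the following is a minimal, hypothetical sketch of a generic greedy black-box attack loop against a binary similarity model. It is not the paper's Spatial Greedy heuristic; the `similarity` oracle, the candidate instruction set, and all names below are illustrative assumptions.

```python
# Hypothetical sketch of a generic greedy black-box attack on a binary
# similarity model. The oracle, candidate set, and all names are
# illustrative assumptions, not the paper's actual method or API.
import random
from typing import Callable, List

def greedy_untargeted_attack(
    similarity: Callable[[List[str], List[str]], float],  # black-box oracle
    func: List[str],        # assembly instructions of the function to perturb
    reference: List[str],   # the function it should no longer match
    candidates: List[str],  # semantics-preserving instructions to insert
    budget: int = 30,       # maximum number of insertions
    threshold: float = 0.5, # similarity below which the attack succeeds
) -> List[str]:
    adv = list(func)
    for _ in range(budget):
        best_score, best_variant = similarity(adv, reference), None
        # Try each candidate instruction at one random position and keep
        # the single edit that lowers the similarity score the most.
        for ins in candidates:
            pos = random.randrange(len(adv) + 1)
            variant = adv[:pos] + [ins] + adv[pos:]
            score = similarity(variant, reference)
            if score < best_score:
                best_score, best_variant = score, variant
        if best_variant is None:    # no single edit helps any further
            break
        adv = best_variant
        if best_score < threshold:  # dissimilar enough: attack succeeded
            break
    return adv
```

In the black-box setting only the similarity score is observable, so each query evaluates a concrete perturbed variant; the paper's Spatial Greedy heuristic additionally guides which candidates to try, which this sketch does not model.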
Adversarial Attacks; Binary Analysis; Binary Code Models; Binary Similarity; Black-box Attacks; Greedy; White-box Attacks
01 Journal publication::01a Journal article
Adversarial Attacks against Binary Similarity Systems / Capozzi, Gianluca; D'Elia, Daniele Cono; Di Luna, Giuseppe Antonio; Querzoni, Leonardo. - In: IEEE ACCESS. - ISSN 2169-3536. - 12:(2024), pp. 161247-161269. [10.1109/access.2024.3488204]
Files attached to this item

File: Capozzi_Adversarial_2024.pdf
Access: open access
Note: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10738789
Type: Publisher's version (published with the publisher's layout)
License: Creative Commons
Size: 6.33 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1724676