Capozzi, Gianluca; D'Elia, Daniele Cono; Di Luna, Giuseppe Antonio; Querzoni, Leonardo. "Adversarial Attacks against Binary Similarity Systems." IEEE Access, vol. 12 (2024), pp. 161247-161269. ISSN 2169-3536. DOI: 10.1109/access.2024.3488204.
Adversarial Attacks against Binary Similarity Systems
Capozzi, Gianluca (first author); D'Elia, Daniele Cono; Di Luna, Giuseppe Antonio; Querzoni, Leonardo
2024
Abstract
Binary analysis has become essential for software inspection and security assessment. As the number of software-driven devices grows, research is shifting toward autonomous solutions based on deep learning models. In this context, a hot topic is the binary similarity problem, which consists in determining whether two assembly functions originate from the same source code. However, it is unclear how deep learning models for binary similarity behave in an adversarial context. In this paper, we study the resilience of binary similarity models against adversarial examples, showing that they are susceptible to both targeted and untargeted attacks (w.r.t. similarity goals) performed by black-box and white-box attackers. We extensively test three state-of-the-art binary similarity solutions against (i) a black-box greedy attack that we enrich with a new search heuristic, termed Spatial Greedy, and (ii) a white-box attack in which we repurpose a gradient-guided strategy used in attacks on image classifiers. Interestingly, the target models are more susceptible to black-box attacks than to white-box ones, and they exhibit greater resilience against targeted attacks.
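For intuition, the following is a minimal, hypothetical sketch of a query-based greedy attack of the general kind the abstract describes, assuming a black-box similarity oracle over lists of assembly instructions and a pool of semantics-preserving candidate instructions (e.g., dead code). All names are illustrative, and this is not the paper's implementation; in particular, the paper's Spatial Greedy heuristic additionally steers how candidate instructions are drawn at each step.

```python
# Hypothetical sketch of a query-based (black-box) greedy attack on a
# binary similarity model. The oracle, candidate pool, and budget are
# illustrative assumptions, not the paper's actual Spatial Greedy attack.
from typing import Callable, List

def greedy_attack(
    similarity: Callable[[List[str], List[str]], float],  # black-box model query
    source: List[str],      # assembly of the function being perturbed
    target: List[str],      # reference function defining the similarity goal
    candidates: List[str],  # semantics-preserving instructions to insert
    rounds: int = 30,       # maximum number of greedy insertion rounds
    targeted: bool = True,  # targeted: raise similarity; untargeted: lower it
) -> List[str]:
    best = list(source)
    best_score = similarity(best, target)
    for _ in range(rounds):
        # Enumerate every single-instruction insertion at every position.
        trials = [
            best[:pos] + [instr] + best[pos:]
            for instr in candidates
            for pos in range(len(best) + 1)
        ]
        scores = [similarity(t, target) for t in trials]
        # Targeted: push the similarity score up; untargeted: push it down.
        pick = (max if targeted else min)(
            range(len(trials)), key=scores.__getitem__
        )
        improved = scores[pick] > best_score if targeted else scores[pick] < best_score
        if not improved:
            break  # local optimum: no single insertion moves the score further
        best, best_score = trials[pick], scores[pick]
    return best
```

A white-box attacker with access to model gradients can replace the exhaustive scoring loop above with a gradient-guided choice of perturbations, which is the strategy the paper repurposes from attacks on image classifiers.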
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| [Capozzi_Adversarial_2024.pdf](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10738789) | Open access | Publisher's version (published with the publisher's layout) | Creative Commons | 6.33 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.