On the Lack of Robustness of Binary Function Similarity Systems / Capozzi, Gianluca; Tang, Tong; Wan, Jie; Yang, Ziqi; D'Elia, Daniele Cono; Di Luna, Giuseppe Antonio; Cavallaro, Lorenzo; Querzoni, Leonardo. - (2025), pp. 980-1001. (10th IEEE European Symposium on Security and Privacy, EuroS&P 2025, Venice) [10.1109/EuroSP63326.2025.00060].
On the Lack of Robustness of Binary Function Similarity Systems
Gianluca Capozzi; Daniele Cono D'Elia; Giuseppe Antonio Di Luna (Supervision); Leonardo Querzoni
2025
Abstract
Binary function similarity, which often relies on learning-based algorithms to identify which functions in a pool are most similar to a given query function, is a sought-after topic in several communities, including machine learning, software engineering, and security. Its importance stems from its role in facilitating crucial tasks, from reverse engineering and malware analysis to automated vulnerability detection. While recent work has shed light on the performance achievable on this long-studied problem, the research landscape still offers little understanding of how resilient state-of-the-art machine learning models are to adversarial attacks. Since security requires reasoning about adversaries, in this work we assess the robustness of such models through a simple yet effective black-box greedy attack that modifies the topology and the contents of the control-flow graph of the attacked functions. We demonstrate that this attack compromises all the models, achieving average attack success rates of 57.06% and 95.81% depending on the problem setting (targeted and untargeted attacks, respectively). Our findings are insightful: top performance on clean data does not necessarily imply top robustness, which highlights performance-robustness trade-offs one should consider when deploying such models and calls for further research.
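The abstract describes the attack only at a high level. The following is a minimal, hypothetical Python sketch of a black-box greedy perturbation loop of the kind the paper outlines: the toy CFG encoding, the `similarity` oracle, the perturbation set, and all function names are illustrative assumptions, not the authors' implementation.

```python
import random
from copy import deepcopy

# Toy CFG: node id -> (list of instruction mnemonics, list of successor ids).
# Stands in for whatever representation the attacked model consumes (hypothetical).
def make_toy_cfg():
    return {
        0: (["push rbp", "mov rbp, rsp"], [1]),
        1: (["cmp eax, 0", "je 3"], [2, 3]),
        2: (["add eax, 1"], [3]),
        3: (["pop rbp", "ret"], []),
    }

# Black-box similarity oracle: in the real setting this is the model under attack,
# queried only through its output scores. Here it is a trivial stand-in.
def similarity(cfg_a, cfg_b):
    size_gap = abs(len(cfg_a) - len(cfg_b))
    return 1.0 / (1.0 + size_gap)

# Candidate semantics-preserving edits (assumed): add a dead block or pad a block
# with no-ops. A real attack must guarantee the rewritten function still assembles
# and behaves identically to the original.
def add_dead_block(cfg):
    new = deepcopy(cfg)
    new[max(new) + 1] = (["nop", "nop"], [])
    return new

def pad_random_block(cfg):
    new = deepcopy(cfg)
    nid = random.choice(list(new))
    instrs, succs = new[nid]
    new[nid] = (instrs + ["nop"], succs)
    return new

PERTURBATIONS = [add_dead_block, pad_random_block]

def greedy_attack(query_cfg, target_cfg, budget=20, targeted=False):
    """Greedily keep the single edit that best moves similarity toward the goal."""
    current = deepcopy(query_cfg)
    best = similarity(current, target_cfg)
    for _ in range(budget):
        candidates = [p(current) for p in PERTURBATIONS]
        scored = [(similarity(c, target_cfg), c) for c in candidates]
        # Targeted attack: raise similarity to the target; untargeted: lower it.
        pick = max if targeted else min
        score, cand = pick(scored, key=lambda sc: sc[0])
        improved = score > best if targeted else score < best
        if not improved:
            break  # no single edit helps anymore; stop greedily
        best, current = score, cand
    return current, best

if __name__ == "__main__":
    adv_cfg, final_score = greedy_attack(make_toy_cfg(), make_toy_cfg(), targeted=False)
    print(f"similarity after untargeted attack: {final_score:.3f}")
```

The greedy structure (evaluate every candidate edit per round, commit only to the best-scoring one) is the essence of the approach; the real attack operates on actual binaries and queries the similarity models under evaluation.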
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| Capozzi_preprint_On-the-Lack_2025.pdf (DOI: 10.1109/EuroSP63326.2025.00060) | Open access | Pre-print (manuscript submitted to the publisher, prior to peer review) | All rights reserved | 1.17 MB | Adobe PDF |
| Capozzi_On-the-Lack_2025.pdf | Repository administrators only (contact the author) | Publisher's version (published version with the publisher's layout) | All rights reserved | 818.62 kB | Adobe PDF |


