SAFE: Self-Attentive Function Embeddings for Binary Similarity / Massarelli, Luca; Di Luna, Giuseppe Antonio; Petroni, Fabio; Baldoni, Roberto; Querzoni, Leonardo. In: Proceedings of the 16th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA 2019), Gothenburg, Sweden. LNCS 11543 (2019), pp. 309-329. [10.1007/978-3-030-22038-9_15]
SAFE: Self-Attentive Function Embeddings for Binary Similarity
Massarelli, Luca; Di Luna, Giuseppe Antonio; Petroni, Fabio; Baldoni, Roberto; Querzoni, Leonardo
2019
Abstract
The binary similarity problem consists of determining whether two functions are similar by considering only their compiled form. Techniques for binary similarity have an immediate practical impact on several fields, such as copyright disputes, malware analysis, and vulnerability detection. Current solutions compare functions by first transforming their binary code into multi-dimensional vector representations (embeddings), and then comparing the vectors through simple and efficient geometric operations. In this paper we propose SAFE, a novel architecture for embedding functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions, and is more general, as it works on stripped binaries and on multiple architectures. We report the results of a quantitative and qualitative analysis showing that SAFE provides a noticeable performance improvement over previous solutions. Furthermore, we show that clusters of our embedding vectors are closely related to the semantics of the implemented algorithms, paving the way for further interesting applications.
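The abstract mentions comparing embedding vectors "through simple and efficient geometric operations". A minimal sketch of one such operation is cosine similarity between embedding vectors; the vectors below are random stand-ins for illustration, not actual SAFE outputs.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=100)                      # embedding of function A (stand-in)
emb_b = emb_a + rng.normal(scale=0.1, size=100)   # a slightly perturbed near-duplicate
emb_c = rng.normal(size=100)                      # an unrelated function (stand-in)

# Similar functions should score close to 1, unrelated ones near 0.
print(cosine_similarity(emb_a, emb_b))
print(cosine_similarity(emb_a, emb_c))
```

Because the comparison reduces to a dot product and two norms, it scales to searching large function corpora with standard nearest-neighbor indexes.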
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Massarelli_Postprint_SAFE_2019.pdf | Open access | Post-print (version after peer review, accepted for publication) | All rights reserved | 3.1 MB | Adobe PDF
Massarelli_SAFE_2019.pdf | Archive managers only (contact the author) | Publisher's version (published with the publisher's layout) | All rights reserved | 1.08 MB | Adobe PDF

Note: https://link.springer.com/chapter/10.1007/978-3-030-22038-9_15
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.