State of the Art and Potentialities of Graph-level Learning / Yang, Z.; Zhang, G.; Wu, J.; Yang, J.; Sheng, Q. Z.; Xue, S.; Zhou, C.; Aggarwal, C.; Peng, H.; Hu, W.; Hancock, E.; Lio, P.. - In: ACM COMPUTING SURVEYS. - ISSN 0360-0300. - 57:2(2024). [10.1145/3695863]

State of the Art and Potentialities of Graph-level Learning

Lio P.
2024

Abstract

Graphs have a superior ability to represent relational data, such as chemical compounds, proteins, and social networks. Hence, graph-level learning, which takes a set of graphs as input, has been applied to many tasks, including comparison, regression, classification, and more. Traditional approaches to learning a set of graphs heavily rely on hand-crafted features, such as substructures. While these methods benefit from good interpretability, they often suffer from computational bottlenecks, as they cannot skirt the graph isomorphism problem. Conversely, deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations. As a result, these deep graph learning methods have been responsible for many successes. Yet, no comprehensive survey reviews graph-level learning starting with traditional learning and moving through to the deep learning approaches. This article fills this gap and frames the representative algorithms into a systematic taxonomy covering traditional learning, graph-level deep neural networks, graph-level graph neural networks, and graph pooling. In addition, the evolution and interaction between methods from these four branches within their developments are examined to provide an in-depth analysis. This is followed by a brief review of the benchmark datasets, evaluation metrics, and common downstream applications. Finally, the survey concludes with an in-depth discussion of 12 current and future directions in this booming field.
2024
deep learning; graph datasets; graph neural networks; graph pooling; graph-level learning
01 Journal publication::01a Journal article
Files attached to this record
Yang_State_2024.pdf

archive administrators only

Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 3.21 MB
Format: Adobe PDF
Yang_preprint_State_2024.pdf

open access

Note: https://dl.acm.org/doi/10.1145/3695863
Type: Preprint (manuscript submitted to the publisher, prior to peer review)
License: Creative Commons
Size: 3.1 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1728752
Citations
  • Scopus: 0