BenchIMP: A Benchmark for Quantitative Evaluation of the Incident Management Process Assessment / Palma, Alessandro; Bartoloni, Nicola; Angelini, Marco. - (2024). (Paper presented at the International Conference on Availability, Reliability and Security, held in Vienna, Austria) [10.1145/3664476.3664504].
BenchIMP: A Benchmark for Quantitative Evaluation of the Incident Management Process Assessment
Alessandro Palma; Marco Angelini
2024
Abstract
In a landscape where cyber-incidents occur daily, an effective Incident Management Process (IMP) and its assessment have assumed paramount significance. While assessment models that evaluate the risks of incidents exist to aid security experts during such a process, most of them provide only qualitative evaluations and are typically validated in individual case studies, predominantly using non-public data. This hinders their comparative quantitative analysis and, due to the lack of baselines, prevents the evaluation of newly proposed solutions and of the applicability of existing ones. To address this challenge, we contribute a benchmarking approach and system, BenchIMP, to support the quantitative evaluation of IMP assessment models based on performance and robustness under the same settings, thus enabling meaningful comparisons. The resulting benchmark is the first one tailored to evaluating process-based security assessment models, and we demonstrate its capabilities through two case studies using real IMP data and state-of-the-art assessment models. We publicly release the benchmark to help the cybersecurity community perform quantitative and more accurate evaluations of IMP assessment models.
File | Access | Type | License | Size | Format
---|---|---|---|---|---
Palma_BenchIMP_2024.pdf (https://doi.org/10.1145/3664476.3664504) | Open access | Publisher's version (published version with the publisher's layout) | Creative Commons | 846.62 kB | Adobe PDF