Campbell, A.; Qendro, L.; Lio, P.; Mascolo, C. Robust and Efficient Uncertainty Aware Biosignal Classification via Early Exit Ensembles. In Proceedings of the 47th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2022), Virtual/Online, 2022, pp. 3998-4002. DOI: 10.1109/ICASSP43922.2022.9746330.
Robust and Efficient Uncertainty Aware Biosignal Classification via Early Exit Ensembles
Campbell, A.; Qendro, L.; Lio, P.; Mascolo, C.
2022
Abstract
Ensembles of deep learning models can be used for estimating predictive uncertainty. Existing ensemble approaches, however, introduce a high computational and memory cost, limiting their applicability to real-time biosignal applications (e.g., ECG, EEG). To address these issues, we propose early exit ensembles (EEEs) for estimating predictive uncertainty via an implicit ensemble of early exits. In particular, EEEs are a collection of weight-sharing sub-networks created by adding exit branches to any backbone neural network architecture. Empirical evaluation of EEEs demonstrates strong performance in accuracy and uncertainty metrics as well as computational gains, highlighting the benefit of combining multiple structurally diverse models that can be jointly trained. Compared to state-of-the-art baselines (with an ensemble size of 5), EEEs can improve uncertainty metrics by up to 2× while providing a test-time speed-up and memory reduction of approximately 5×. Additionally, EEEs can improve accuracy by up to 3.8 percentage points compared to single-model baselines.
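To make the idea in the abstract concrete, below is a minimal PyTorch sketch of an early exit ensemble for 1D biosignals. It is a sketch under stated assumptions, not the authors' implementation: the small convolutional backbone, the exit-head design, the joint sum-of-losses training, and the use of averaged softmax plus predictive entropy for uncertainty are all illustrative choices, and names such as `EarlyExitEnsemble` and `predict_with_uncertainty` are hypothetical.

```python
# Hypothetical sketch of an early exit ensemble (EEE) for uncertainty estimation.
# The backbone, exit placement, and uncertainty measure are assumptions for
# illustration, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitEnsemble(nn.Module):
    """Backbone split into stages, with a lightweight exit head after each stage."""

    def __init__(self, num_classes: int = 5, in_channels: int = 1):
        super().__init__()
        # Backbone stages for 1D biosignal windows (e.g. single-lead ECG).
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_channels, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2)),
            nn.Sequential(nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2)),
            nn.Sequential(nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2)),
        ])
        # One exit branch per stage: global pooling + linear classifier.
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(c, num_classes))
            for c in (16, 32, 64)
        ])

    def forward(self, x):
        # Return the logits of every exit; the exits share the backbone weights.
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(exit_head(x))
        return logits


def train_step(model, x, y, optimizer):
    # Joint training: sum the cross-entropy loss over all exits.
    optimizer.zero_grad()
    loss = sum(F.cross_entropy(logits, y) for logits in model(x))
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def predict_with_uncertainty(model, x):
    # Treat the exits as an implicit ensemble: average their softmax outputs
    # and use the predictive entropy of the mean as the uncertainty estimate
    # (a common choice; the paper may use a different measure).
    probs = torch.stack([F.softmax(logits, dim=-1) for logits in model(x)]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return probs.argmax(dim=-1), entropy
```

Because every exit reuses the same backbone activations in a single forward pass, the ensemble prediction costs roughly one network evaluation rather than one per ensemble member, which is where the reported test-time speed-up and memory reduction come from.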
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| Campbell_Robust_2022.pdf | Publisher's version (published with the publisher's layout) | All rights reserved | 7.17 MB | Adobe PDF | Restricted (archive managers only); contact the author |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise stated.