Stiefel-SPD Manifold Graph Convolution for End-to-End EEG Learning / Tibermacine, I. E.; Russo, S.; Napoli, C. - In: IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING. - ISSN 1534-4320. - 34:(2026), pp. 595-606. [10.1109/TNSRE.2026.3652858]
Stiefel-SPD Manifold Graph Convolution for End-to-End EEG Learning
Tibermacine I. E.; Russo S.; Napoli C.
2026
Abstract
Electroencephalographic (EEG) decoding relies heavily on second-order (covariance) structure that lives on the manifold of symmetric positive-definite (SPD) matrices. Conventional deep networks in Euclidean space ignore this geometry, distorting geodesic relations between covariances; classical Riemannian pipelines respect SPD metrics but typically use fixed projections and a single global tangent embedding, which limits task adaptivity and incurs cubic costs in the channel dimension. We propose a fully geometry-consistent architecture that preserves manifold structure end-to-end while remaining trainable at scale. A compact depthwise-separable convolutional neural network (CNN) produces features whose regularized covariances lie on the SPD manifold. A learnable orthonormal projection, optimized on the Stiefel manifold via Riemannian stochastic gradient descent (SGD) with QR-factorization (QR) retraction, reduces dimensionality without breaking positive-definiteness and preserves an eigenvalue floor. We then perform tangent-space graph-SPD aggregation on a scalp k-nearest-neighbor graph—neighbor covariances are transported to the reference tangent space, attention-averaged, and mapped back via the exponential—followed by a log-Euclidean mapping and linear softmax classification. This Stiefel→Graph-SPD→log chain explains why full geometric consistency matters: it avoids Euclidean shortcuts, keeps all intermediates SPD, and makes log/exp costs cubic in the reduced rank d. In cross-subject evaluation on three public datasets, the model attains 83.2% accuracy with improved macro-F1, strong separability (macro-AUROC ≈ 0.90), and well-calibrated probabilities (ECE ≤ 0.04), outperforming strong Euclidean CNNs and Riemannian baselines while remaining computationally pragmatic.
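The abstract's learnable projection is optimized by Riemannian SGD on the Stiefel manifold with a QR retraction. A minimal NumPy sketch of that update is below; the learning rate, dimensions, and the stand-in gradient are illustrative assumptions, not values from the paper.

```python
import numpy as np

def project_to_tangent(W, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold St(n, d) = {W : W^T W = I} at the point W."""
    WtG = W.T @ G
    sym = 0.5 * (WtG + WtG.T)
    return G - W @ sym

def qr_retraction(W, xi):
    """Map the point W + xi back onto St(n, d) via QR factorization.
    Column signs are fixed so diag(R) > 0, making Q well defined."""
    Q, R = np.linalg.qr(W + xi)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs  # sign flip broadcasts over columns

# One Riemannian SGD step with an arbitrary (illustrative) gradient.
rng = np.random.default_rng(0)
n, d, lr = 8, 3, 0.1
W, _ = np.linalg.qr(rng.standard_normal((n, d)))  # start on the manifold
G = rng.standard_normal((n, d))                   # stand-in Euclidean gradient
W_new = qr_retraction(W, -lr * project_to_tangent(W, G))
print(np.allclose(W_new.T @ W_new, np.eye(d)))    # orthonormality preserved
```

The retraction keeps every iterate exactly orthonormal, which is what lets the downstream congruence W^T S W preserve positive-definiteness.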

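Two other steps named in the abstract — dimensionality reduction that keeps covariances SPD with an eigenvalue floor, and the log-Euclidean mapping before the linear classifier — can be sketched as follows. The floor value, sizes, and random data are assumptions for illustration.

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition
    (the log-Euclidean map to the tangent space at the identity)."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def stiefel_reduce(S, W, eps=1e-5):
    """Congruence W^T S W with orthonormal W keeps the matrix SPD;
    an eigenvalue floor eps guards against numerical rank loss."""
    R = W.T @ S @ W
    w, V = np.linalg.eigh(R)
    return (V * np.maximum(w, eps)) @ V.T

rng = np.random.default_rng(1)
C, d = 16, 4                               # channels, reduced rank
X = rng.standard_normal((C, 200))
S = X @ X.T / 200 + 1e-3 * np.eye(C)       # regularized covariance, SPD
W, _ = np.linalg.qr(rng.standard_normal((C, d)))
S_red = stiefel_reduce(S, W)               # d x d, still SPD
vec = spd_logm(S_red)[np.triu_indices(d)]  # log-Euclidean feature vector
print(np.all(np.linalg.eigvalsh(S_red) > 0))  # True: positive-definite
```

Because all eigendecompositions act on the reduced d x d matrix, the log/exp cost is cubic in d rather than in the channel count, matching the complexity claim in the abstract.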

