Tuesday 7 October 2025, 16:00 - 17:00 in HG00.616
Devika Narain (Donders)
Unsupervised manifold learning using low-distortion Riemannian alignment of tangent spaces
Neuroscience is currently experiencing a significant shift in data analysis approaches, driven by new technologies that enable the simultaneous recording of hundreds of neurons. When the activity of such large neural ensembles is analyzed with linear dimensionality reduction methods such as principal component analysis, unexpected structure and topology are revealed in the latent representations of the data. These data topologies often take the form of nonlinear manifolds, and variables of interest may be encoded along the intrinsic dimensions of these structures, which can be accessed through manifold learning approaches. However, a common challenge plagues most nonlinear manifold learning methods that seek to embed higher-dimensional topologies in their lower intrinsic dimension: distortion. In the field of machine learning there is no widely adopted mathematical definition of distortion, and consequently no manifold learning approach has been particularly attentive to minimizing it. Here, we propose a measure of distortion during manifold learning, called global distortion, and use it to quantify distortion in embeddings produced by different methods. Furthermore, we develop a new bottom-up manifold learning method, Riemannian Alignment of Tangent Spaces (RATS), which consistently yields embeddings with the lowest distortion among existing methods. I will demonstrate the potential of RATS on synthetic and biological datasets and discuss how insights from the mathematical community might improve such methodology in the future.
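
The abstract does not give the formal definition of the proposed global distortion measure. As an illustration of the kind of quantity involved, the sketch below compares approximate geodesic distances in the original data (shortest paths on a k-nearest-neighbor graph) with Euclidean distances in a candidate embedding; the function name embedding_distortion, the scale normalization, and the choice of n_neighbors are assumptions made for this example and are not the measure introduced in the talk.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform
from sklearn.neighbors import kneighbors_graph

def embedding_distortion(X_high, X_low, n_neighbors=10):
    """Illustrative distortion score (not the talk's definition): compare
    geodesic distances in the original data, approximated by shortest paths
    on a k-NN graph, with Euclidean distances in the embedding."""
    # Approximate geodesic distances on the original (high-dimensional) manifold
    knn = kneighbors_graph(X_high, n_neighbors=n_neighbors, mode="distance")
    D_geo = shortest_path(knn, method="D", directed=False)

    # Pairwise Euclidean distances in the low-dimensional embedding
    D_emb = squareform(pdist(X_low))

    # Keep finite, nonzero pairs (disconnected graph components give inf,
    # the diagonal gives 0)
    mask = np.isfinite(D_geo) & (D_geo > 0)

    # Scale-invariant relative discrepancy, averaged over all retained pairs
    ratio = D_emb[mask] / D_geo[mask]
    ratio /= np.median(ratio)          # remove the global scale factor
    return np.mean(np.abs(np.log(ratio)))
```

Such a score can be computed for embeddings produced by different manifold learning methods on the same dataset, allowing them to be ranked by how strongly they distort pairwise geometry.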
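RATS is described as a bottom-up method built from tangent spaces, but the abstract does not detail how those tangent spaces are constructed or aligned. The sketch below shows, purely as background, the standard first step shared by tangent-space methods: estimating a tangent basis at each sample by running PCA on its local neighborhood. The function name, intrinsic_dim, and n_neighbors are illustrative choices, and the Riemannian alignment step that distinguishes RATS is not shown.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_tangent_spaces(X, intrinsic_dim=2, n_neighbors=12):
    """Estimate a tangent space at every sample via PCA on its local
    neighborhood. Returns, per point, an orthonormal tangent basis and
    the neighborhood centroid."""
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nbrs.kneighbors(X)

    bases, centroids = [], []
    for neighborhood in idx:
        patch = X[neighborhood]
        mu = patch.mean(axis=0)
        # The leading right-singular vectors of the centered patch span
        # the local tangent-space estimate.
        _, _, Vt = np.linalg.svd(patch - mu, full_matrices=False)
        bases.append(Vt[:intrinsic_dim])
        centroids.append(mu)
    return np.asarray(bases), np.asarray(centroids)
```

A bottom-up embedding would then stitch these local patches together; how RATS performs that alignment while keeping distortion low is the subject of the talk.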