"The future is already here — it's just not very evenly distributed."

- William Gibson

Research

I am fascinated by the question: what makes a "good" representation? Because machines lack the incredible evolutionary gifts we developed over millions of years, we have to find ways to short-circuit that learning process. My research has so far focused predominantly on unsupervised (deep) machine learning, with a special interest in methods exploring (non-trivial) manifold learning.

Publications

T.R. Davidson*, L. Falorsi*, N. De Cao*, T. Kipf, J.M. Tomczak, Hyperspherical Variational Auto-Encoders, Oral presentation at the 34th Conference on Uncertainty in Artificial Intelligence (UAI, 2018)
[arXiv] [code] [oral]

L. Falorsi*, P. de Haan*, T.R. Davidson*, N. De Cao, M. Weiler, P. Forré, T. Cohen, Explorations in Homeomorphic Variational Auto-Encoding, ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models (ICML, 2018)
[arXiv] [code]

L. Falorsi, P. de Haan, T.R. Davidson, P. Forré, Reparameterizing Distributions on Lie Groups, Oral presentation at the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS, 2019)
[arXiv] [code]

T.R. Davidson, J.M. Tomczak, E. Gavves, Increasing Expressivity of a Hyperspherical VAE, NeurIPS Workshop on Bayesian Deep Learning (NeurIPS, 2019)

T.R. Davidson, The Shape of a Black Box: A Closer Look at Structured Latent Spaces, Master's Thesis, University of Amsterdam, 2019