"The future is already here  —  it's just not very evenly distributed"

- William Gibson -

I am interested in the safe and reliable integration of machine learning methods into critical systems (read: systems with a low margin of error). I pursue this interest primarily through the topics of data fusion and transportability. Loosely, this is motivated by the observation that humans can:

(1) seamlessly fuse signals from various sources, and then
(2) quickly select the most relevant subsets for any given task.

My past research has predominantly focused on unsupervised (deep) generative modeling, with a special interest in methods exploring (non-trivial) manifold learning.

Research

T.R. Davidson*, V. Veselovsky*, M. Josifoski, M. Peyrard, A. Bosselut, M. Kosinski, R. West, Evaluating Language Model Agency through Negotiations (ICLR, 2024)
[arXiv] [code] [blog] [data]

T.R. Davidson*, L. Falorsi*, N. De Cao*, T. Kipf, J.M. Tomczak, Hyperspherical Variational Autoencoders, Oral presentation (UAI, 2018)
[arXiv] [code] [oral]

L. Falorsi*, P. de Haan*, T.R. Davidson*, N. De Cao, M. Weiler, P. Forré, T. Cohen, Explorations in Homeomorphic Variational Auto-Encoding, ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models (ICML, 2018)
[arXiv] [code]

L. Falorsi, P. de Haan, T.R. Davidson, P. Forré, Reparameterizing Distributions on Lie Groups, Oral presentation (AISTATS, 2019)
[arXiv] [code] [slides]

T.R. Davidson, J.M. Tomczak, E. Gavves, Increasing Expressivity of a Hyperspherical VAE, NeurIPS Workshop on Bayesian Deep Learning (NeurIPS, 2019)
[arXiv] [code]

T.R. Davidson, The Shape of a Black Box: A Closer Look at Structured Latent Spaces, Master's Thesis (University of Amsterdam, 2019)
[pdf]