Deep Learning

Boundary Graph Neural Networks for 3D Simulations

We generalize graph neural network-based simulations of Lagrangian dynamics to the complex boundaries encountered in everyday engineering setups. Published at AAAI 2023.

Geometric Deep Learning

My passion for Geometric Deep Learning can be unmistakably traced back to my physics background. I have contributed to the fields of graph neural networks, equivariant architectures, and neural PDE solvers. Furthermore, I have led efforts to introduce Lie Point Symmetries and, most recently, Clifford (Geometric) Algebras into the Deep Learning community.

Looking at the Performer from a Hopfield point of view

We analyze the Performer paper from a Hopfield point of view. Published as a blog post at ICLR 2022.
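For readers unfamiliar with the Performer, its central trick is to approximate the softmax attention kernel with positive random features so that attention can be computed in linear time. A simplified numpy sketch of this idea (plain Gaussian features instead of the orthogonal random features used in the paper; the function names and feature count are illustrative choices, not the reference implementation):

```python
import numpy as np

def positive_random_features(x, w):
    # FAVOR+-style positive features: phi(x) = exp(w x - |x|^2 / 2) / sqrt(m),
    # chosen so that E[phi(q)^T phi(k)] = exp(q^T k).
    m = w.shape[0]
    return np.exp(x @ w.T - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

def performer_attention(Q, K, V, m=256, seed=0):
    # Estimate softmax(Q K^T / sqrt(d)) V as D^-1 phi(Q) (phi(K)^T V),
    # which is linear (not quadratic) in the sequence length.
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((m, d))
    qp = positive_random_features(Q / d**0.25, w)
    kp = positive_random_features(K / d**0.25, w)
    num = qp @ (kp.T @ V)                        # numerator, shape (n, d_v)
    den = qp @ kp.sum(axis=0, keepdims=True).T   # softmax normalizer D
    return num / den
```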

Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER

We prove the convergence of actor-critic reinforcement learning algorithms under commonly used assumptions. Published in Transactions on Large-Scale Data- and Knowledge-Centered Systems XLVIII.

Align-RUDDER -- Learning From Few Demonstrations by Reward Redistribution

We introduce Align-RUDDER, which learns from few demonstrations by redistributing rewards based on a multiple sequence alignment of the demonstrations. Published at ICML 2022.

Hopfield Networks is All You Need

We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new update rule is equivalent to the attention mechanism used in transformers. Published at ICLR 2021.
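The equivalence can be seen directly in code. Below is a minimal numpy sketch (the shapes and the inverse temperature beta are illustrative, and the learned query/key/value projections of a full transformer layer are omitted): one Hopfield update step on continuous state patterns is exactly softmax attention with the states as queries and the stored patterns as both keys and values.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

X = np.random.randn(16, 64)   # stored patterns, shape (N, d)
Xi = np.random.randn(4, 64)   # continuous state patterns, shape (B, d)
beta = 1.0 / np.sqrt(64)      # inverse temperature, here 1/sqrt(d)

# One step of the modern Hopfield update rule:
# xi_new = softmax(beta * xi X^T) X
Xi_new = softmax(beta * Xi @ X.T) @ X

# The same computation read as transformer attention, with queries Q = Xi
# and keys = values = X (projection matrices W_Q, W_K, W_V omitted):
Q, K, V = Xi, X, X
attention_out = softmax(beta * Q @ K.T) @ V
assert np.allclose(Xi_new, attention_out)
```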

Modern Hopfield Networks and Attention for Immune Repertoire Classification

We exploit the storage capacity of modern Hopfield networks to solve a challenging multiple instance learning (MIL) problem in computational biology: immune repertoire classification. Published at NeurIPS 2020 (Spotlight).
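As a rough illustration of how a Hopfield layer can pool a huge bag of instances into a single representation, here is a hedged numpy sketch. The helper name, the learned query vector, and the instance embeddings are assumptions for illustration, not the published architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hopfield_pooling(instances, query, beta=0.1):
    """Pool a variable-size bag of instance embeddings into one vector.

    instances: (n, d) embeddings of one bag (e.g. the sequences of one
    immune repertoire), query: (d,) state pattern that retrieves the
    relevant instances. Returns a fixed-size (d,) bag representation
    that a downstream MIL classifier can consume.
    """
    weights = softmax(beta * instances @ query)  # attention over the bag
    return weights @ instances

# Usage: bags of different sizes map to same-size representations.
bag = np.random.randn(1000, 32)   # 1000 instances, 32-dim embeddings
q = np.random.randn(32)           # learned in practice; random here
rep = hopfield_pooling(bag, q)    # shape (32,)
```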

General Deep Learning

After switching from High Energy Physics to Deep Learning, I first worked in Reinforcement Learning before pivoting towards Associative Memories and modern Transformer networks. Recent years have shown that scalable ideas, better datasets, and clever engineering are the ingredients for ever better Deep Learning models. This fully matches my experience, and -- needless to say -- I will continue working on general large-scale Deep Learning directions.

RUDDER -- Return Decomposition for Delayed Rewards

We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes. Published at NeurIPS 2019.
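The core redistribution step can be sketched in a few lines, assuming a sequence model (e.g. an LSTM trained to predict the episode return from state-action sequences) already provides per-step return predictions. The function name and the uniform correction term below are illustrative simplifications, not the full RUDDER pipeline:

```python
import numpy as np

def redistribute_reward(return_predictions, episode_return):
    """Turn a delayed episodic return into immediate per-step rewards.

    return_predictions: (T,) predictions g_t of the final return after
    each step t, from a model trained on complete episodes. The
    redistributed reward is the difference of consecutive predictions,
    so steps that change the expected return receive credit immediately.
    """
    g = np.asarray(return_predictions, dtype=float)
    rewards = np.diff(g, prepend=0.0)            # r_t = g_t - g_{t-1}
    # Spread any residual prediction error uniformly so the
    # redistributed rewards still sum to the true episode return.
    rewards += (episode_return - g[-1]) / len(g)
    return rewards

# Usage: the model only becomes confident of success at step 2,
# so most of the delayed reward is moved to that step.
r = redistribute_reward([0.0, 0.1, 0.9, 1.0], episode_return=1.0)
assert np.isclose(r.sum(), 1.0)
```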