We generalize graph-neural-network-based simulations of Lagrangian dynamics to the complex boundaries encountered in everyday engineering setups. Published at AAAI 2023.
My passion for Geometric Deep Learning can be unmistakably traced back to my physics background. I have contributed to the fields of graph neural networks, equivariant architectures, and neural PDE solvers. Furthermore, I have led efforts to introduce Lie Point Symmetries and, most recently, Clifford (Geometric) Algebras into the Deep Learning community.
We prove, under commonly used assumptions, the convergence of actor-critic reinforcement learning algorithms. Published in Transactions on Large-Scale Data- and Knowledge-Centered Systems XLVIII.
We generalize steerable E(3)-equivariant graph neural networks so that node and edge updates can leverage covariant information; a formula sketch follows below. Published at ICLR 2022 (Oral).
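As a rough sketch of what that means in formulas (my notation, a simplification of the paper's steerable message-passing layer): node and edge updates are steerable MLPs $\phi_m$ and $\phi_f$, whose linear layers are Clebsch–Gordan tensor products conditioned on geometric attributes $\tilde{a}_{ij}$ and $\tilde{a}_i$ (e.g., spherical-harmonic embeddings of relative positions):

$$\tilde{m}_{ij} = \phi_m\!\left(\tilde{f}_i,\, \tilde{f}_j,\, \lVert x_j - x_i \rVert^2;\; \tilde{a}_{ij}\right), \qquad \tilde{f}_i' = \phi_f\!\Big(\tilde{f}_i,\, \textstyle\sum_{j \in \mathcal{N}(i)} \tilde{m}_{ij};\; \tilde{a}_i\Big)$$

Because the features $\tilde{f}$ are steerable vectors (stacks of type-$l$ components), covariant quantities such as forces or velocities can enter both the messages and the node updates directly.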
We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new update rule is equivalent to the attention mechanism used in transformers. Published at ICLR 2021.
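For reference, a compact restatement of that update rule (notation adapted from the paper, with $X$ the matrix of stored patterns, $\xi$ the state pattern, and $\beta$ an inverse temperature):

$$\xi^{\text{new}} = X \,\operatorname{softmax}\!\left(\beta\, X^{\top} \xi\right)$$

Mapping states to queries, stored patterns to keys and values, and setting $\beta = 1/\sqrt{d_k}$ recovers transformer attention, $\operatorname{softmax}\!\left(Q K^{\top}/\sqrt{d_k}\right) V$, up to transposition conventions.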
We exploit the storage capacity of modern Hopfield networks to solve a challenging multiple instance learning (MIL) problem in computational biology: immune repertoire classification. Published at NeurIPS 2020 (Spotlight).
After switching from High Energy Physics to Deep Learning, I started working in Reinforcement Learning before pivoting towards Associative Memories and modern Transformer networks. Recent years have shown that scalable ideas, better datasets, and clever engineering are the ingredients for ever-better Deep Learning models. This fully matches my experience, and, needless to say, I will continue working on general large-scale Deep Learning directions.