
Clifford Group Equivariant Neural Networks

We introduce a novel method to construct E(n)- and O(n)-equivariant neural networks using Clifford algebras. Published at NeurIPS 2023 (Oral).
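
A minimal numpy sketch of the key property (my illustration, not the paper's code): a layer that scales each multivector grade with its own weight commutes with the rotation action on Cl(2,0), the simplest instance of the O(n)-equivariance the paper constructs. All weight values below are made up.

```python
import numpy as np

def rotate(mv, theta):
    """Rotation action on a Cl(2,0) multivector (s, v1, v2, b):
    scalar and bivector stay fixed, the vector grade rotates."""
    s, v1, v2, b = mv
    c, si = np.cos(theta), np.sin(theta)
    return np.array([s, c * v1 - si * v2, si * v1 + c * v2, b])

def gradewise_linear(mv, w_scalar, w_vector, w_bivector):
    """Equivariant 'layer': one learnable scale per grade."""
    s, v1, v2, b = mv
    return np.array([w_scalar * s, w_vector * v1, w_vector * v2, w_bivector * b])

x = np.array([0.5, 1.0, -2.0, 0.3])   # an arbitrary multivector
theta = 0.7
lhs = gradewise_linear(rotate(x, theta), 1.3, -0.4, 2.0)
rhs = rotate(gradewise_linear(x, 1.3, -0.4, 2.0), theta)
assert np.allclose(lhs, rhs)          # layer(rho(g) x) == rho(g) layer(x)
```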

Geometric Clifford Algebra Networks

We introduce Geometric Clifford Algebra Networks (GCANs) which parameterize combinations of learnable group actions. Published at ICML 2023.
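
A toy 2D sketch of a group action layer (my simplification, not the paper's implementation): the input is transformed by several learnable rotations, i.e. group actions, and the results are linearly combined; both the angles and the mixing weights would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
thetas = rng.uniform(0, 2 * np.pi, K)   # learnable group elements (angles)
weights = rng.normal(size=K)            # learnable mixing coefficients

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def group_action_layer(v):
    """y = sum_k w_k * R(theta_k) @ v : a linear combination of group actions."""
    return sum(w * rot(t) @ v for w, t in zip(weights, thetas))

print(group_action_layer(np.array([1.0, 0.0])))
```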

Clifford Neural Layers for PDE Modeling

We introduce neural network layers based on operations on composite objects of scalars, vectors, and higher-order objects such as bivectors. Published at ICLR 2023.
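
The central operation behind such layers is the geometric product. A minimal numpy sketch for Cl(2,0) (an illustration, not the paper's code), with multivector components ordered as (scalar, e1, e2, e12):

```python
import numpy as np

def geometric_product(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0 * b0 + a1 * b1 + a2 * b2 - a3 * b3,  # scalar part
        a0 * b1 + a1 * b0 - a2 * b3 + a3 * b2,  # e1 part
        a0 * b2 + a2 * b0 + a1 * b3 - a3 * b1,  # e2 part
        a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1,  # e12 (bivector) part
    ])

# Two pure vectors: their product is a scalar (dot) plus a bivector (wedge).
u = np.array([0.0, 1.0, 0.0, 0.0])  # e1
v = np.array([0.0, 0.0, 1.0, 0.0])  # e2
print(geometric_product(u, v))      # -> [0, 0, 0, 1] = e12
```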

Lie Point Symmetry Data Augmentation for Neural PDE Solvers

We show how to use Lie point symmetries of PDEs to improve the sample complexity of neural PDE solvers. Published at ICML 2022 (Spotlight).
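
A toy example of the idea (my sketch, not the paper's code): many PDEs are invariant under space translations, so shifting a periodic solution yields a new, equally valid training trajectory at zero cost.

```python
import numpy as np

def translate_x(u, shift):
    """u: array of shape (time, space) on a periodic grid; roll along space."""
    return np.roll(u, shift, axis=1)

u = np.random.default_rng(0).normal(size=(10, 64))  # stand-in trajectory
u_aug = translate_x(u, shift=7)                     # augmented sample
```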

Message Passing Neural PDE Solvers

In this work, we introduce a message passing neural PDE solver that replaces all heuristically designed components in numerical PDE solvers with backprop-optimized neural function approximators. Published at ICLR 2022 (Spotlight).
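
A simplified PyTorch sketch of one message-passing step in this spirit (my illustration; layer sizes and names are hypothetical): messages are computed from solution differences and relative grid positions, summed per node, and used to update the node embeddings.

```python
import torch
import torch.nn as nn

class MPStep(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * hidden + 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.upd = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, h, u, x, edge_index):
        src, dst = edge_index                            # (2, E) neighbor pairs
        m = self.msg(torch.cat([h[src], h[dst],
                                (u[src] - u[dst])[:, None],
                                (x[src] - x[dst])[:, None]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum messages per node
        return self.upd(torch.cat([h, agg], dim=-1))
```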

Geometric and Physical Quantities Improve E(3) Equivariant Message Passing

We generalise steerable E(3) equivariant graph neural networks such that node and edge updates are able to leverage covariant information. Published at ICLR 2022 (Spotlight).
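
A simplified sketch in this spirit (an EGNN-style toy of mine, not the paper's steerable architecture): messages depend on rotation-invariant quantities, while geometric information enters through relative position vectors that are only ever rescaled, keeping the update covariant.

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.phi_m = nn.Sequential(nn.Linear(2 * hidden + 1, hidden), nn.SiLU())
        self.phi_x = nn.Linear(hidden, 1, bias=False)
        self.phi_h = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.SiLU())

    def forward(self, h, pos, edge_index):
        src, dst = edge_index
        rel = pos[src] - pos[dst]                    # covariant vectors
        d2 = (rel ** 2).sum(-1, keepdim=True)        # invariant feature
        m = self.phi_m(torch.cat([h[src], h[dst], d2], -1))
        pos = pos + torch.zeros_like(pos).index_add_(0, dst, rel * self.phi_x(m))
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        return self.phi_h(torch.cat([h, agg], -1)), pos
```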

Boundary Graph Neural Networks for 3D Simulations

We generalize graph neural network based simulations of Lagrangian dynamics to the complex boundaries encountered in real-world engineering setups. Published at AAAI 2023.
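
A toy numpy sketch of the graph-building idea (my simplification, not the paper's method): connect particles to static boundary nodes whenever they come within a cutoff radius, so that messages can flow from walls into the particle simulation.

```python
import numpy as np

particles = np.random.default_rng(0).uniform(0, 1, size=(50, 3))
# hypothetical flat wall at z = 0, sampled as a grid of boundary nodes
gx, gy, gz = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10), [0.0])
boundary = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

cutoff = 0.15
d = np.linalg.norm(particles[:, None, :] - boundary[None, :, :], axis=-1)
pairs = np.argwhere(d < cutoff)   # (particle, boundary-node) edges to add
```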

Looking at the Performer from a Hopfield point of view

A blog post that analyzes the Performer paper from a Hopfield point of view. Published as a blog post at ICLR 2022.

Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

We introduce Align-RUDDER, which aligns a handful of demonstrations to identify key events and redistributes the episode return to those events, enabling reinforcement learning from few demonstrations. Published at ICML 2022 (Oral).
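
A toy numpy sketch of reward redistribution (my simplification of the idea, not the paper's alignment procedure; the per-step scores are made up): the episode return is redistributed to steps that match key events from aligned demonstrations, so credit arrives where it is earned instead of only at the end.

```python
import numpy as np

episode_return = 10.0
alignment_score = np.array([0.0, 0.0, 3.0, 0.0, 1.0, 0.0])  # hypothetical per-step scores
redistributed = episode_return * alignment_score / alignment_score.sum()
assert np.isclose(redistributed.sum(), episode_return)       # return is preserved
```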

Hopfield Networks is All You Need

We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new update rule is equivalent to the attention mechanism used in transformers. Published at ICLR 2021.
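
A minimal numpy sketch of that update rule, xi_new = X softmax(beta * X^T xi), which is exactly one step of transformer-style attention over the stored patterns X (the pattern dimensions and beta below are arbitrary choices).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 5))              # 5 stored patterns of dimension 16
xi = X[:, 2] + 0.1 * rng.normal(size=16)  # noisy query near pattern 2
beta = 8.0

xi_new = X @ softmax(beta * (X.T @ xi))   # one Hopfield/attention update
print(np.argmax(X.T @ xi_new))            # typically retrieves pattern 2
```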