Self-Attention

Looking at the Performer from a Hopfield point of view

Blog post analyzing the Performer paper from a Hopfield point of view. Published as a blog post at ICLR 2022.

Hopfield Networks is All You Need

We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new update rule is equivalent to the attention mechanism used in transformers. Published at ICLR 2021.
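In brief, the update rule retrieves stored patterns via a softmax over dot products. The following is a compact sketch of the equivalence in the paper's notation (stored patterns X, state pattern ξ, inverse temperature β); it is meant as an illustration, not a verbatim reproduction of the paper's derivation.

```latex
% One-step update of the modern Hopfield network with continuous states,
% for stored patterns X = (x_1, ..., x_N) and state (query) pattern \xi:
\[
  \xi^{\mathrm{new}} = X \,\operatorname{softmax}\!\left(\beta\, X^{\top} \xi\right)
\]
% Applying this update in parallel to projected queries Q, keys K, and values V,
% with \beta = 1/\sqrt{d_k}, recovers the transformer attention mechanism:
\[
  \operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
```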

Modern Hopfield networks and attention for immune repertoire classification

We exploit the storage capacity of modern Hopfield networks to solve a challenging multiple instance learning (MIL) problem in computational biology: immune repertoire classification. Published at NeurIPS 2020 (Spotlight).
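To give a rough feeling for how Hopfield-style retrieval can pool a variable-sized bag of instances (e.g., receptor sequences) into a single repertoire representation, here is a minimal NumPy sketch. The function name, dimensions, and the use of a single learned query vector are illustrative assumptions, not the exact architecture from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_pooling(instances, query, beta=1.0):
    """Pool a bag of instance embeddings into one vector.

    instances: (N, d) array, one embedding per instance (e.g., per receptor sequence)
    query:     (d,)   learned state pattern used to retrieve from the bag
    beta:      inverse temperature controlling how peaked the attention is
    """
    # Attention weights over the bag, following the modern Hopfield update rule.
    weights = softmax(beta * instances @ query)   # (N,)
    # Repertoire-level representation: softmax-weighted average of the instances.
    return weights @ instances                    # (d,)

# Toy usage: a "repertoire" of 5 instances with 8-dimensional embeddings.
rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))
q = rng.normal(size=8)
pooled = hopfield_pooling(bag, q, beta=2.0)
print(pooled.shape)  # (8,)
```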

General Deep Learning

After switching from High Energy Physics to Deep Learning, I first worked in Reinforcement Learning before pivoting towards Associative Memories and modern Transformer networks. Recent years have shown that scalable ideas, better datasets, and clever engineering are the key ingredients for ever better Deep Learning models. This matches my own experience, and -- needless to say -- I will continue working on general large-scale Deep Learning directions.