Vision-LSTM -- xLSTM as Generic Vision Backbone

We introduce Vision-LSTM (ViL), an adaptation of the xLSTM building blocks to computer vision.
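
At its core, ViL splits an image into non-overlapping patches, flattens them into a sequence, and runs a stack of xLSTM blocks over it, alternating the traversal direction from block to block. A minimal sketch of that structure, with a plain nn.LSTM standing in for the actual mLSTM block (a stand-in assumed here for brevity):

```python
import torch
from torch import nn

class ViLSketch(nn.Module):
    """Minimal Vision-LSTM-style backbone sketch. A plain LSTM stands in
    for the xLSTM (mLSTM) block; the structural ideas shown are patch
    embedding and alternating the sequence traversal direction per block."""

    def __init__(self, img_size=224, patch_size=16, dim=192, depth=4, num_classes=1000):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.blocks = nn.ModuleList(
            [nn.LSTM(dim, dim, batch_first=True) for _ in range(depth)]
        )
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.patch_embed(x)              # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)     # (B, N, dim) row-major patch sequence
        x = x + self.pos_embed
        for i, block in enumerate(self.blocks):
            if i % 2 == 1:                   # odd blocks traverse the sequence backwards
                x = torch.flip(x, dims=[1])
            out, _ = block(x)
            x = x + out                      # residual connection
            if i % 2 == 1:                   # restore row-major order
                x = torch.flip(x, dims=[1])
        x = self.norm(x).mean(dim=1)         # mean-pool patch tokens (simplified)
        return self.head(x)
```

Alternating the direction gives every patch access to context on both sides despite the recurrence being one-directional within each block.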

xLSTM -- Extended Long Short-Term Memory

How far do we get in language modeling when scaling LSTMs to billions of parameters, leveraging the latest techniques from modern LLMs, but mitigating known limitations of LSTMs?
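
A central mitigation in xLSTM is exponential gating paired with a normalizer state. Roughly as introduced in the paper (the stabilizer state used for numerical robustness is omitted here), the sLSTM cell reads:

```latex
% sLSTM cell with exponential gating (stabilizer state omitted)
\begin{aligned}
c_t &= f_t\, c_{t-1} + i_t\, z_t, &\quad& \text{cell state}\\
n_t &= f_t\, n_{t-1} + i_t,       &\quad& \text{normalizer state}\\
h_t &= o_t \odot c_t / n_t,       &\quad& \text{hidden state}\\
i_t &= \exp(\tilde{i}_t), \quad f_t = \sigma(\tilde{f}_t)\ \text{or}\ \exp(\tilde{f}_t), \quad o_t = \sigma(\tilde{o}_t).
\end{aligned}
```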

Geometry-Informed Neural Networks

We introduce geometry-informed neural networks (GINNs) to train shape generative models without any data.
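
The trick is to replace data losses with geometric constraints imposed on an implicit shape network. A minimal sketch under assumptions of ours (network size, the specific eikonal and interface penalties, and the attachment points are illustrative, not the paper's setup):

```python
import torch
from torch import nn

# Implicit shape: f(x) ~ signed distance, zero level set = surface.
shape_net = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def eikonal_loss(net, n_points=1024):
    """Penalize deviation of ||grad f|| from 1 so f behaves like a signed
    distance field -- a geometric constraint that needs no training data,
    only points sampled from the domain."""
    x = (torch.rand(n_points, 3) * 2 - 1).requires_grad_(True)
    f = net(x)
    (grad,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

def interface_loss(net, surface_points):
    """Ask the zero level set to pass through prescribed interface points
    (e.g. where the shape must attach to a fixed boundary)."""
    return net(surface_points).pow(2).mean()

opt = torch.optim.Adam(shape_net.parameters(), lr=1e-3)
attach = torch.tensor([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])  # hypothetical attachment points
for step in range(1000):
    loss = eikonal_loss(shape_net) + interface_loss(shape_net, attach)
    opt.zero_grad()
    loss.backward()
    opt.step()
```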

Universal Physics Transformers -- A Framework For Efficiently Scaling Neural Operators

We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles.
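
Concretely, a UPT-style operator encodes an arbitrary set of input points into a fixed number of latent tokens, advances the dynamics purely on those tokens, and decodes by letting query coordinates attend back into the latent space. A schematic sketch, with layer sizes and the simple attention layout assumed for illustration:

```python
import torch
from torch import nn

class UPTSketch(nn.Module):
    """Schematic UPT-style operator: point set in -> fixed latent tokens ->
    latent dynamics -> field values at arbitrary query positions."""

    def __init__(self, in_dim=4, dim=128, n_latent=64, out_dim=1):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(1, n_latent, dim) * 0.02)
        self.embed_in = nn.Linear(in_dim, dim)     # input features incl. coordinates
        self.embed_q = nn.Linear(3, dim)           # query coordinates
        self.encode = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.propagate = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 4, batch_first=True), num_layers=2
        )
        self.decode = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.head = nn.Linear(dim, out_dim)

    def forward(self, points, queries):
        # points: (B, N_in, in_dim) -- mesh cells or particles, any N_in
        # queries: (B, N_q, 3) -- arbitrary evaluation positions
        tokens = self.embed_in(points)
        z = self.latent.expand(points.size(0), -1, -1)
        z, _ = self.encode(z, tokens, tokens)      # cross-attend: latent <- inputs
        z = self.propagate(z)                      # advance dynamics in latent space
        q = self.embed_q(queries)
        out, _ = self.decode(q, z, z)              # cross-attend: queries <- latent
        return self.head(out)
```

Because neither encoder nor decoder fixes a grid or particle count, the same model can be queried at positions it never saw during training.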

MIM-Refiner -- A Contrastive Learning Boost from Intermediate Pre-Trained Representations

We introduce MIM-Refiner, a contrastive learning boost for pre-trained masked image modeling (MIM) models.
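
The refinement attaches instance-discrimination heads to intermediate encoder blocks and trains them with a contrastive objective. A minimal sketch of the underlying objective family, InfoNCE on two augmented views (the projection head and batch setup are illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """Standard InfoNCE on embeddings of two views of the same images:
    matching rows are positives, all other rows negatives. MIM-Refiner
    applies objectives of this family to features taken from intermediate
    encoder blocks, not only the last one."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))       # positive pair sits on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch: features from an intermediate block, projected by a small head.
B, D = 32, 256
head = torch.nn.Linear(D, 128)
feats_view1 = torch.randn(B, D)             # encoder block k, augmentation 1
feats_view2 = torch.randn(B, D)             # encoder block k, augmentation 2
loss = info_nce(head(feats_view1), head(feats_view2))
```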

Neural SPH -- Improved Neural Modeling of Lagrangian Fluid Dynamics

Smoothed particle hydrodynamics (SPH) is omnipresent in modern engineering and scientific disciplines. SPH is a class of Lagrangian schemes that discretize fluid dynamics via finite material points that are tracked through the evolving velocity …

We identify particle clustering originating from tensile instabilities as one of the primary pitfalls. Based on these insights, we enhance both training and rollout inference of GNN-based simulators with varying components from standard SPH solvers, including pressure, viscous, and external force components.
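
The SPH components can be sketched concretely: kernel-sum density, an equation-of-state pressure, and a pressure-gradient shift of the predicted particle positions. The relaxation step below is our illustrative reading, not the paper's exact scheme (all constants are placeholders):

```python
import numpy as np

def cubic_spline_w(q):
    """3D cubic spline SPH kernel (support 2h), unnormalized, in q = r/h."""
    return np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
           np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def sph_pressure_relaxation(x, m=1.0, h=0.1, rho0=1.0, c0=10.0, gamma=7.0, dt=1e-4):
    """One hypothetical SPH-style relaxation step applied to predicted
    particle positions x (N, 3): kernel-sum density, Cole equation of
    state, symmetric pressure acceleration, explicit position shift."""
    d = x[:, None, :] - x[None, :, :]               # pairwise offsets (N, N, 3)
    r = np.linalg.norm(d, axis=-1)                  # pairwise distances
    q = r / h
    sigma = 1.0 / (np.pi * h**3)                    # 3D kernel normalization
    rho = m * sigma * cubic_spline_w(q).sum(axis=1)             # density
    p = (c0**2 * rho0 / gamma) * ((rho / rho0) ** gamma - 1.0)  # pressure (EOS)
    # Kernel gradient dW/dq, pointing along the pairwise offset.
    dwdq = np.where(q < 1.0, -3.0 * q + 2.25 * q**2,
           np.where(q < 2.0, -0.75 * (2.0 - q) ** 2, 0.0))
    np.fill_diagonal(r, 1.0)                        # avoid divide-by-zero on self pairs
    grad_w = (sigma / h) * dwdq[..., None] * d / r[..., None]
    coef = (p / rho**2)[:, None] + (p / rho**2)[None, :]        # symmetric pressure term
    acc = -m * (coef[..., None] * grad_w).sum(axis=1)
    return x + dt * acc                             # shift positions down grad(p)
```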

Lie Point Symmetries and Physics-Informed Networks

We show how to use Lie point symmetries of PDEs to improve physics-informed neural networks. Published at NeurIPS 2023.
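
As a concrete example of ours (not taken from the paper): the heat equation admits a scaling symmetry that maps solutions to solutions, which can be turned into an additional residual loss on transformed network outputs:

```latex
% The heat equation u_t = u_{xx} admits the scaling symmetry
(x, t, u) \;\mapsto\; (\lambda x, \lambda^2 t, u), \qquad \lambda > 0,
% i.e. u_\lambda(x,t) := u(\lambda x, \lambda^2 t) solves the PDE whenever u does.
% A symmetry-informed loss can therefore penalize the PDE residual of
% transformed network outputs as well:
\mathcal{L}_{\mathrm{sym}} = \mathbb{E}_{\lambda, x, t}
\Big[ \big( \partial_t u_{\theta,\lambda} - \partial_{xx} u_{\theta,\lambda} \big)^2 \Big].
```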

PDE-Refiner -- Achieving Accurate Long Rollouts with Neural PDE Solvers

PDE-Refiner is an iterative refinement process that enables neural operators to make accurate and stable predictions over long time horizons. Published at NeurIPS 2023 (Spotlight).
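
The refinement loop is denoising-style: start from a direct one-step prediction, then repeatedly perturb it with noise of decreasing amplitude and let the model predict and subtract that noise, sharpening poorly captured high-frequency components. A sketch of inference under an assumed model signature (`model(u_prev, u_k, k)`; the noise amplitudes are illustrative, not the paper's values):

```python
import torch

@torch.no_grad()
def refined_step(model, u_prev, sigmas=(0.1, 0.01, 0.001)):
    """Schematic PDE-Refiner-style inference for one time step.
    model(u_prev, u_k, k) is a hypothetical signature: it returns the
    initial prediction at k = 0 and a noise estimate for k > 0."""
    u = model(u_prev, torch.zeros_like(u_prev), 0)  # initial one-step prediction
    for k, sigma in enumerate(sigmas, start=1):
        noise = sigma * torch.randn_like(u)
        u_noised = u + noise                        # perturb current estimate
        pred_noise = model(u_prev, u_noised, k)     # model predicts the noise
        u = u_noised - pred_noise                   # denoise -> refined estimate
    return u

def rollout(model, u0, n_steps):
    """Autoregressive rollout with a refinement loop per step."""
    traj = [u0]
    for _ in range(n_steps):
        traj.append(refined_step(model, traj[-1]))
    return torch.stack(traj)
```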

Learning Lagrangian Fluid Mechanics with E(3)-Equivariant Graph Neural Networks

We apply E(3)-equivariant GNNs to two well-studied fluid-flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow. Published at GSI 2023.
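
For intuition, E(3)-equivariance means that rotating or translating the particle state transforms the prediction the same way. One standard way to build such layers, shown here as a generic EGNN-style illustration rather than the paper's exact architecture, is to compute messages from rotation-invariant quantities and update positions only along relative vectors:

```python
import torch
from torch import nn

class EquivariantLayer(nn.Module):
    """Generic E(n)-equivariant message-passing layer (EGNN-style).
    Messages depend only on rotation-invariant quantities; position
    updates point along relative vectors, so rotating and translating
    the inputs transforms the outputs accordingly."""

    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.coord = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1))

    def forward(self, h, x):
        # h: (N, dim) invariant features, x: (N, 3) positions (fully connected here)
        d = x[:, None, :] - x[None, :, :]           # relative vectors (N, N, 3)
        dist2 = (d ** 2).sum(-1, keepdim=True)      # invariant edge feature
        n = h.size(0)
        pair = torch.cat([h[:, None].expand(-1, n, -1),
                          h[None, :].expand(n, -1, -1), dist2], dim=-1)
        m = self.msg(pair)                          # messages built from invariants
        x = x + (d * self.coord(m)).mean(dim=1)     # move only along relative vectors
        h = h + self.upd(torch.cat([h, m.sum(dim=1)], dim=-1))
        return h, x
```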
