We introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles.
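UPTs follow an encode-process-decode pattern: an encoder compresses an arbitrary set of mesh nodes or particles into a fixed number of latent tokens, a transformer advances the latent state, and a decoder queries the field at arbitrary positions. Below is a minimal sketch of that pattern; the module sizes, the cross-attention encoder/decoder, and all names are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class UPTSketch(nn.Module):
    def __init__(self, in_dim=3, dim=64, n_latent=16, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)            # per-point features
        self.latent = nn.Parameter(torch.randn(n_latent, dim))
        self.enc = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.approximator = nn.TransformerEncoder(     # latent time stepping
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True),
            num_layers=2,
        )
        self.dec = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.query_embed = nn.Linear(2, dim)           # query coordinates
        self.head = nn.Linear(dim, 1)                  # predicted field value

    def forward(self, points, queries):
        # points: (B, N, in_dim) -- arbitrary mesh nodes or particles
        # queries: (B, M, 2)     -- arbitrary output positions
        x = self.embed(points)
        z = self.latent.expand(points.size(0), -1, -1)
        z, _ = self.enc(z, x, x)                       # compress to latent tokens
        z = self.approximator(z)                       # advance latent state
        q = self.query_embed(queries)
        out, _ = self.dec(q, z, z)                     # decode at query points
        return self.head(out)

model = UPTSketch()
pred = model(torch.randn(2, 100, 3), torch.rand(2, 50, 2))  # (2, 50, 1)
```

Because neither encoder nor decoder assumes a grid or a particle list, the same model can be trained across meshes and particle discretizations.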
PDE-Refiner is an iterative refinement process that enables training neural operators to make accurate and stable predictions over long time horizons. Published at NeurIPS 2023 (Spotlight).
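At inference time, PDE-Refiner starts from an ordinary one-step prediction and then repeatedly perturbs it with Gaussian noise of decreasing scale and denoises it again, conditioned on the refinement step. A minimal sketch of that loop, assuming a trained `model(u_t, u_hat, k)` interface; the conditioning interface, step count, and exponential noise schedule here are illustrative:

```python
import torch

@torch.no_grad()
def refine(model, u_t, num_steps=3, min_sigma=1e-4):
    # Step 0: an ordinary one-step prediction of u_{t+1}.
    u_hat = model(u_t, torch.zeros_like(u_t), 0)
    for k in range(1, num_steps + 1):
        sigma = min_sigma ** (k / num_steps)       # decreasing noise scale
        noised = u_hat + sigma * torch.randn_like(u_hat)
        noise_pred = model(u_t, noised, k)         # predict the injected noise
        u_hat = noised - sigma * noise_pred        # remove it again
    return u_hat
```

Each refinement step forces the model to attend to progressively smaller-amplitude components of the solution, which is what stabilizes long rollouts.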
We present PDEArena, a modern PyTorch Lightning-based deep learning framework for neural PDE modeling. Published in TMLR, 07/2023.
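For illustration, here is a pared-down PyTorch Lightning module of the kind such a framework wraps around a neural PDE surrogate; this is a generic sketch of the pattern, not PDEArena's actual API:

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl

class PDEModel(pl.LightningModule):
    def __init__(self, net: nn.Module, lr: float = 1e-3):
        super().__init__()
        self.net = net
        self.lr = lr

    def training_step(self, batch, batch_idx):
        u_t, u_next = batch                    # current and next states
        loss = nn.functional.mse_loss(self.net(u_t), u_next)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)
```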
We introduce neural network layers based on operations on multivectors: composite objects of scalars, vectors, and higher-order objects such as bivectors. Published at ICLR 2023.
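Concretely, in the 2D Clifford algebra Cl(2,0) a multivector has one scalar, two vector, and one bivector component, and the layers combine them through the geometric product. A minimal sketch of that product, assuming a `(scalar, e1, e2, e12)` channel layout (the storage convention and function name are illustrative):

```python
import torch

def geometric_product(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # a, b: (..., 4) multivectors with components (scalar, e1, e2, e12)
    a0, a1, a2, a3 = a.unbind(-1)
    b0, b1, b2, b3 = b.unbind(-1)
    return torch.stack([
        a0 * b0 + a1 * b1 + a2 * b2 - a3 * b3,   # scalar part
        a0 * b1 + a1 * b0 - a2 * b3 + a3 * b2,   # e1 part
        a0 * b2 + a2 * b0 + a1 * b3 - a3 * b1,   # e2 part
        a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1,   # e12 (bivector) part
    ], dim=-1)

# Sanity check: e1 * e2 gives the bivector e12.
e1 = torch.tensor([0., 1., 0., 0.])
e2 = torch.tensor([0., 0., 1., 0.])
print(geometric_product(e1, e2))  # tensor([0., 0., 0., 1.])
```

Treating related field quantities (e.g. a scalar pressure and a velocity vector) as one multivector lets a single layer mix them in a geometrically meaningful way.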
We show how to use Lie point symmetries of PDEs to improve the sample complexity of neural PDE solvers. Published at ICML 2022 (Spotlight).
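For example, if a PDE with periodic boundary conditions is invariant under spatial translations, then shifting a solution trajectory along x yields another valid solution for free. A minimal augmentation sketch under those assumptions; the `(time, x)` trajectory layout and the integer-shift restriction are simplifications:

```python
import torch

def space_translate(u: torch.Tensor) -> torch.Tensor:
    # u: (time, x) solution trajectory on a periodic spatial grid.
    # A random integer grid shift realizes the translation exactly.
    shift = int(torch.randint(u.size(-1), (1,)))
    return torch.roll(u, shifts=shift, dims=-1)

# Augmented trajectories solve the same PDE, so they act as extra
# training data for a neural solver at no simulation cost.
traj = torch.randn(50, 128)
aug = space_translate(traj)
```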