HOI-Diff: Text-Driven Synthesis of 3D Human-Object Interactions using Diffusion Models, arXiv 2023
Official Implementation of the Paper: Controllable Human-Object Interaction Synthesis (ECCV 2024 Oral)
Official implementation of TeSMo, a method for text-controlled scene-aware motion generation, from the ECCV 2024 paper: "Generating Human Interaction Motions in Scenes with Text Control".
This repository contains code and data instructions for ROAM, 3DV 2024. Authors: Wanyue Zhang, Rishabh Dabral, Thomas Leimkühler, Vladislav Golyanik†, Marc Habermann†, Christian Theobalt.
Large Motion Model for Unified Multi-Modal Motion Generation
Official repository of "CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement".
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
[ECCV 2024] MotionLCM: This repo is the official implementation of "MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model"
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
[ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion"
MotionFix: Text-Driven 3D Human Motion Editing [SIGGRAPH ASIA 2024]
[CVPR 2025] We present StableAnimator, the first end-to-end ID-preserving video diffusion framework, which synthesizes high-quality videos without any post-processing, conditioned on a reference image.
[ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation
[SIGGRAPH 2025] Official code of the paper "FlexiAct: Towards Flexible Action Control in Heterogeneous Scenarios"
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs