École des Ponts ParisTech
https://imagine.enpc.fr/~varolg/
Stars
[NeurIPS 2023] Official implementation of the paper "Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset"
a text-to-gloss-to-pose-to-video pipeline for spoken to signed language translation
ChatGLM-6B: An Open Bilingual Dialogue Language Model
The official implementation of the paper "Human Motion Diffusion as a Generative Prior"
Official PyTorch implementation of the paper "TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis" ICCV 2023
Official PyTorch implementation of the paper "SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation" [ICCV 2023]
Official Repository of ChatCaptioner
Productive, portable, and performant GPU programming in Python.
[ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators
[CVPR'23 Highlight] AutoAD: Movie Description in Context.
Official ECCV 2022 repository for SUPR: A Sparse Unified Part-Based Human Representation
The best way to write secure and reliable applications. Write nothing; deploy nowhere.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Repository for the paper "Human Mesh Recovery from Multiple Shots"
Code for the Foot Implicit Neural Deformation model
[CVPR'19 Best Paper Finalist] Extracting 3D human motion and contact forces from a single video
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
[ECCV 2022] SAGA: Stochastic Whole-Body Grasping with Contact
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Code for "MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare", CoRL 2022.
Python code for the "Probabilistic Machine Learning" book by Kevin Murphy
A feature-rich command-line audio/video downloader
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners