- Tsinghua University, Beijing
- https://chufengt.github.io/

Stars
We release Evo-RL, an open-source framework for real-world offline RL on the SO-101 and AgileX PiPER arms, for easier reproduction.
InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation
GigaBrain-0: A World Model-Powered Vision-Language-Action Model
Code for kai0, including training, inference and data collection.
Building General-Purpose Robots Based on Embodied Foundation Model
Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
Spirit-v1.5: A Robotic Foundation Model by Spirit AI
Running VLA at 30Hz frame rate and 480Hz trajectory frequency
RoboBrain 2.5: Advanced version of RoboBrain. Depth in Sight, Time in Mind. 🎉🎉🎉
An AI-native slides generator built on nano banana pro 🍌, moving toward true "Vibe PPT": upload any template image; upload any assets with smart parsing; auto-generate slides from a single sentence, an outline, or per-page descriptions; revise specified regions by voice; and export editable PPT in one click.
Tensor's VLA Training Infrastructure for Real-World Robotics in PyTorch
A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation
verl: Volcano Engine Reinforcement Learning for LLMs
RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI
Dexbotic: Open-Source Vision-Language-Action Toolbox
NVIDIA Isaac GR00T N1.6 - A Foundation Model for Generalist Robots.
Machine Learning Engineering Open Book
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
LLM knowledge sharing that everyone can understand; a must-read before large-model interviews in spring/autumn campus recruiting, so you can talk fluently with interviewers.
[Lumina Embodied AI Community] A technical guide to embodied AI: Embodied-AI-Guide
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏