This source code has been developed to let Python and its libraries communicate with the Unity Engine. This use case relies on Ultralytics' YOLOv8 and sends position information to Unity in order to create interactions and animations with it.
- Unity 2022.3.13f1 or later
- Python 3.8 or later
- Make sure you have a webcam or a video file to test the project
- This installation guide has been tested on Linux and macOS, but it should also work on Windows with some modifications
```
.
├── configs/          # Tracker configuration presets
├── docs/assets/      # Images and plots used in the documentation
├── scripts/          # Helper scripts (e.g. Windows installer)
├── src/pywui/        # Python package source code
├── tests/            # Automated tests
├── Makefile          # Common development tasks
├── pyproject.toml    # Build system and packaging metadata
└── UnityProject/     # Unity sample project
```

Clone this public repo wherever you want:
```bash
git clone https://github.com/mathis-lambert/python-yolo-wrapper-for-unity-interactions yolo_with_unity
cd yolo_with_unity
```

It's not a Docker project for now, but it may become one later...
Initialize the project with these commands; this will create a virtual environment and install the dependencies:

```bash
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
make install               # Windows without make? Run scripts/install-windows.bat instead
```

Open the project with Unity 2022.3.13f1 or later. You can find the project in the `UnityProject/` folder.
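Before moving on to Unity, you can sanity-check the Python environment from inside the activated virtualenv. This is a hypothetical helper script, not part of the package; `ultralytics` and `cv2` are assumed to be the dependencies the wrapper installs:

```python
import importlib.util
import sys


def env_report():
    """Return (in_venv, deps) where deps maps module name -> importable."""
    # A virtual environment sets sys.prefix different from sys.base_prefix.
    in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
    deps = {
        name: importlib.util.find_spec(name) is not None
        for name in ("ultralytics", "cv2")  # assumed dependencies
    }
    return in_venv, deps


if __name__ == "__main__":
    in_venv, deps = env_report()
    print(f"virtualenv active: {in_venv}")
    for name, ok in deps.items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```

If a dependency prints `MISSING`, re-run `make install` inside the activated environment.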
- Click on the ADD button in the Unity Hub and select the `UnityProject/` folder
- Select the project and click on the OPEN button
Open a terminal and run these commands. This runs the script `src/pywui/main.py` with the `--help` option to show the available options:

```bash
source .venv/bin/activate
pywui --help
```

Use your webcam:

```bash
pywui --source 0
```

Show the webcam output:

```bash
pywui --source 0 --show
```

Use a video file:

```bash
pywui --source path/to/video.mp4
```

You can also execute the main module directly during development:

```bash
python -m pywui.main --help
```

To run the tests, open a terminal and run these commands:

```bash
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pywui-test
```

The use case is simple: the Python script sends the position of each detected object to Unity, and Unity moves the corresponding object to that position.
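The position hand-off can be sketched as follows. This is an illustrative assumption, not the package's actual wire format: detections serialized as JSON and pushed over UDP to a port a Unity-side script listens on (the port number and message shape are made up for this sketch):

```python
import json
import socket

UNITY_ADDR = ("127.0.0.1", 5065)  # assumed host/port for the Unity listener


def make_message(detections):
    """Serialize a list of (object_id, x, y) tuples into a JSON payload."""
    return json.dumps({
        "objects": [
            {"id": obj_id, "x": x, "y": y} for obj_id, x, y in detections
        ]
    }).encode("utf-8")


def send_positions(sock, detections):
    """Send one frame's detections to Unity as a single UDP datagram."""
    sock.sendto(make_message(detections), UNITY_ADDR)


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_positions(sock, [(0, 0.42, 0.17)])  # one detected object
    sock.close()
```

UDP is a natural fit here because a dropped frame is harmless: the next detection simply overwrites the stale position on the Unity side.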
Run the command:

```bash
pywui --model ./models/yolov8s-pose.pt --detect-method track --source 0 --conf 0.7 --filter
```

- Open the scene `Assets/Scenes/Main.unity`
- Click on the play button
- You should see one articulation group per person detected by the Python script, and each group's articulations should follow that person's position

- Open the scene `Assets/Scenes/Articulation.unity`
- Click on the play button
- You should see one articulation group per person detected by the Python script, and each group's articulations should follow that person's position
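Moving an articulation to a detected person's position implies converting between coordinate systems. The helper below is a sketch under assumed conventions (not verified against the Unity project): YOLO keypoints in pixel coordinates with the origin at the top-left, mapped into an arbitrary centered Unity world rectangle with the y axis flipped:

```python
def pixel_to_world(px, py, img_w, img_h, world_w=16.0, world_h=9.0):
    """Map an image-space keypoint to Unity-style world coordinates.

    Image origin is top-left with y pointing down; Unity's 2D origin is
    centered with y pointing up, so we normalize, recenter, and flip y.
    The 16x9 world rectangle is an arbitrary choice for this sketch.
    """
    nx = px / img_w - 0.5   # normalized and centered: [-0.5, 0.5]
    ny = 0.5 - py / img_h   # flip y so "up" is positive
    return nx * world_w, ny * world_h
```

For example, the center of a 1920x1080 frame maps to the world origin, and the top-left pixel maps to the top-left corner of the world rectangle.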

- `python3` might not work; use `python` instead
- If you get an error with the `pywui` command, run `source .venv/bin/activate` again, then `pywui` should work
- If you get an error with the `pywui-test` command, run `source .venv/bin/activate` again, then `pywui-test` should work
If you want to collaborate on this open source repo, you're free to do so:

- Fork the repo
- Create your own branch
- Develop your feature
- Open a PR describing the purpose of your feature