| 📝 Paper | 🗃️ Download Eval Dataset |
This repository provides a pipeline for anonymizing data and for training/evaluating YOLO object detection models on the anonymized data.
Features:
- Face and full-body anonymization via DeepPrivacy2
- YOLO training with flexible configuration
- Evaluation using enhanced COCO metrics (AP, mAP, F1/F2, SSIM) and MATLAB scripts for graphs
Interested in a more detailed view of our experiments?
See our workflows for single experiments.
Interested in our training and finetuning?
See our training and fine-tuning doc.
Interested in our own Eval Dataset?
Download it here.
For an easy overview of the dependencies, we added a single Dockerfile for every module.
The modules ran together on our GPU server. Since training the larger YOLO models requires a lot of VRAM, we recommend a comparable setup to future users of this repo/code.
⚠ Dockerfiles contain hardware-related versions (e.g. CUDA) and paths. Please compare them with your hardware before you start.
⚠ Configure the volume mounts and all paths to suit your system and folder structure; see the Dockerfiles and configuration files, depending on what you want to do.
⚠ If you want to run other experiments with other settings, we highly advise you to carefully check all our configuration files, as this pipeline depends heavily on them. Also check the Configuration section of this README.
Build the Docker images from within the repo folder with:

```bash
docker compose -f docker/build_all.yml build
```

Run the container and enter it:

```bash
docker compose -f privacy_docker-compose.yml up
docker compose exec -it deepprivacy2 bash
```

Configure the data paths and the wanted anonymization mode, then run the anonymization with `python3 anonymize.py`.
Full-body anonymization:

```bash
python3 anonymize.py configs/anonymizers/FB_cse.py -i /root/data/image.jpg --output_path /root/data/output_image.png
```

Face anonymization:

```bash
python3 anonymize.py configs/anonymizers/face.py -i data/input.jpg --output_path data/output.png
```
Run the container and enter it:

```bash
docker compose -f train_docker-compose.yml up
docker compose exec -it train_yolo bash
```

Depending on the goal of the training, we supply different bash scripts. Edit the YOLO training parameters, the paths to the data, and the other arguments of the training script accordingly. Detailed information about the configuration can be found below.
Train YOLOv10 over all model sizes:

```bash
./src/train_YOLOv10/train_all_model_sizes.sh
```

Finetune a YOLOv10 model with different layer configurations:

```bash
./src/train_YOLOv10/train_all_model_sizes.sh
```

Run the container and enter it:

```bash
docker compose -f eval_docker-compose.yml up
docker compose exec -it eval bash
```

The evaluation utilizes the COCO API and enhances it with F1/F2 scores through fdet-api. Depending on the desired metric and model, we supply different scripts. For graphs, run the MATLAB files; the results need to be parsed into .csv files first.
AP, mAP, F1- & F2-scores:

```bash
# all model sizes
./src/eval/eval_all_model_sizes.sh

# models with frozen layers
./src/eval/freeze_eval.sh

# or call the script directly
python3 src/eval/run_eval.py -config "path to eval config" -net "folder name for model"
```

⚠ Pay attention to the wanted folder structure! Details in the Configuration section.
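For the direct call, the two arguments name the eval config file and the model folder. A hypothetical `argparse` sketch of such an interface (illustrative only, not the repo's actual code):

```python
import argparse

# Illustrative sketch of run_eval.py's command line as called above.
# The real script's argument handling may differ.
parser = argparse.ArgumentParser(description="Evaluate a trained YOLO model")
parser.add_argument("-config", required=True,
                    help="path to the evaluation .yaml config")
parser.add_argument("-net", required=True,
                    help="folder name of the model under model_path")

# Example invocation with illustrative values:
args = parser.parse_args(["-config", "config/eval/fb.yaml", "-net", "fb_yolov10m"])
print(args.config, args.net)
```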
SSIM (m-size model):

```bash
./src/eval/eval_m_ssim.sh
```

Parse the results to .csv:

```bash
./src/eval/helper/parse_results.py
```

The modules Training and Evaluation use their own specific configuration files. These files can be found within the config folder.
This section gives a summary of their general structure through some minimal examples.
DATA_CONFIG_PATH: To generate the file defining the used classes, see the Ultralytics documentation.
⚠ Configure the path to the used data (images and annotations) within the file given through this path.
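A minimal data config in the Ultralytics format might look like this (all paths and class names here are illustrative, not from this repo):

```yaml
# Hypothetical minimal Ultralytics data config (paths/classes are examples)
path: /root/data/fb_coco10    # dataset root
train: images/train           # train images, relative to path
val: images/val               # val images, relative to path
names:
  0: person
  1: bicycle
```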
Model parameters for the YOLO training can be set in the bash script or on the command line when calling train.py directly. Currently supported parameters are (descriptions in the parameter description of the Ultralytics docs):
- epochs
- batch size
- img size
- optimizer
- momentum
- weight decay
- freeze
- lr0
- lrf
- warmup epochs
- warmup momentum
Additional parameters are:
- model_path: Path to the pretrained model, or None for training from scratch
- untrained_model: Untrained model size configuration and path; default='yolov10n.yaml', the required file will be loaded automatically
⚠ If model_path is set, the module will always try to load this model. Setting untrained_model won't do anything in this case; it only serves as documentation of your training config when a model_path is given.
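The precedence described above can be sketched as follows (a hypothetical helper for illustration, not the repo's actual loading code):

```python
def resolve_model(model_path, untrained_model="yolov10n.yaml"):
    """Mirror the precedence described above: a pretrained checkpoint,
    if given, always wins; the size .yaml is only used from scratch."""
    if model_path is not None:
        return model_path        # always loaded when set
    return untrained_model       # fresh model of the requested size

# Example calls with illustrative paths:
resolve_model("runs/detect/fb_yolov10m/weights/last.pt")  # checkpoint wins
resolve_model(None, "yolov10s.yaml")                      # train from scratch
```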
```bash
# Define the path to your data configuration file
DATA_CONFIG_PATH="path_to_repo/config/train/fb_coco10.yaml"

# Optional parameters
EPOCHS=100
BATCH_SIZE=40
IMG_SIZE=640
OPTIMIZER="SGD"

# Array of model sizes
declare -a ModelSizes=("yolov10n.yaml" "yolov10s.yaml" "yolov10m.yaml" "yolov10l.yaml" "yolov10x.yaml")

# Loop through the model sizes and run the training script for each
for MODEL_SIZE in "${ModelSizes[@]}"
do
    echo "Starting training for model size: $MODEL_SIZE"
    python3 train.py "$DATA_CONFIG_PATH" --untrained_model=$MODEL_SIZE --epochs=$EPOCHS --batch_size=$BATCH_SIZE --img_size=$IMG_SIZE --optimizer=$OPTIMIZER --momentum=0.937 --weight_decay=0.0005
    echo "Training completed for $MODEL_SIZE"
done
echo "All models trained and saved successfully."
```

We rely on a specific file structure to load the model from, which is derived from the way Ultralytics saves the trained weights:
```
model_path + "/" + <"-net" script argument, used as the specific folder name> + "/weights/last.pt"
```

Example path:

```
/path_to_repo/src/train_YOLOv10/runs/detect/fb_yolov10m/weights/last.pt
```
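The path assembly above can be written out as a small sketch (the join rule comes from the text; the function name and example values are illustrative):

```python
from pathlib import Path

def last_checkpoint(model_path: str, net: str) -> str:
    """Build model_path + "/" + net + "/weights/last.pt" as described above."""
    return str(Path(model_path) / net / "weights" / "last.pt")

print(last_checkpoint("/path_to_repo/src/train_YOLOv10/runs/detect", "fb_yolov10m"))
# /path_to_repo/src/train_YOLOv10/runs/detect/fb_yolov10m/weights/last.pt
```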
Set the evaluation parameters in a .yaml file as described in this example:

```yaml
scene_name: 'fb_on_own_hd' # name of dataset, used as folder name to save results
path_anno: '/path/to/dataset/annotations/fixed_id_instances.json' # path of coco annotations
path_imgs: '/path/to/own_dataset/images/' # path to imgs
model_path: '/this repo/src/train_YOLOv10/runs/detect' # path to model parameter file .pt
image_file_format: 'png' # file format of imgs, used to automatically grab all image paths

# array of wanted img ids (null means all ids)
img_ids: null # imgs to be evaluated on
cat_ids: null # wanted category ids

# evaluation save paths
path_detections: '/this repo/data/detections' # path where to save detections
path_eval: '/this repo/data/eval' # main path where to save evaluation results, gets extended by scene_name
```

- Docker & Docker Compose
- NVIDIA Container Toolkit
⚠ Files contain hardware-related versions (e.g. CUDA) and paths. Please compare them with your hardware before you start.
For every component, see the specific Docker and Docker Compose files for further dependencies.
This project uses code originally developed by:
- Hukkelås, Håkon and Lindseth, Frank - DeepPrivacy2
- yhsmiley - fdet-api
- Ultralytics
- COCO API
