mali-afridi/ADAS

Advanced Driver Assistance System

The official implementation of our Final Year Project: Transformer Application on Computer Vision: Advanced Driver Assistance System

What's New

  • Added support for Streamlit and a custom Islamabad Roads inference visualization

Method

We modify the Real-Time Detection Transformer (RT-DETR) by replacing its ResNet-34 backbone with a DLA-34 backbone. We then fuse this modified RT-DETR with CLRerNet to achieve multi-modality, i.e. lane segmentation + object detection.
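The fusion described above can be sketched structurally as one shared backbone feeding two task heads. The sketch below is pure Python with illustrative class names and string stand-ins for tensors; it is not the repo's actual API, only a picture of the composition.

```python
# Structural sketch of the backbone swap and model fusion described above.
# All names here are illustrative, not taken from the ADAS codebase.

class DLA34Backbone:
    """Stand-in for the DLA-34 feature extractor that replaces ResNet-34."""
    name = "dla34"

    def extract(self, image):
        return f"features[{self.name}]({image})"


class RTDETRHead:
    """Detection head: consumes backbone features, emits object boxes."""
    def detect(self, feats):
        return f"boxes({feats})"


class LaneHead:
    """CLRerNet-style head: consumes the same features, emits lane masks."""
    def segment(self, feats):
        return f"lanes({feats})"


class CLRerNetTransformer:
    """Fused model: one shared backbone, two task-specific heads."""
    def __init__(self, backbone):
        self.backbone = backbone
        self.det_head = RTDETRHead()
        self.lane_head = LaneHead()

    def forward(self, image):
        feats = self.backbone.extract(image)  # shared computation
        return {
            "objects": self.det_head.detect(feats),   # object detection branch
            "lanes": self.lane_head.segment(feats),   # lane segmentation branch
        }


model = CLRerNetTransformer(DLA34Backbone())
out = model.forward("frame.jpg")
```

Sharing the backbone is what keeps the fused model's parameter count and FPS close to the single-task detector, as the performance table below reflects.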

Performance

Our modified RT-DETR with a DLA-34 backbone (RT-DETR-D34) achieves state-of-the-art performance on the COCO benchmark, surpassing the original ResNet-34-based RT-DETR results while maintaining CULane benchmark performance and comparable model complexity for object detection.

| Model | Backbone | Task | Dataset | APval | APval50 | Params (M) | FPS (1080ti) | FPS (1660ti) |
|---|---|---|---|---|---|---|---|---|
| RT-DETR-R34 | ResNet-34 | Object | COCO | 48.9 | 66.8 | 31 | 33.3 | 17.22 |
| RT-DETR-D34 (ours) | DLA-34 | Object | COCO | 49.8 | 67.4 | 33 | 32.9 | 17.10 |
| CLRerNet-Transformer-D (ours) | DLA-34 | Object + Lane | COCO | 49.8 | 67.4 | 49 | 32.9 | 16.70 |
| CLRerNet-Transformer-R (ours) | ResNet-50 | Object + Lane | COCO | 53.1 | 71.3 | 58 | 26.63 | 14.3 |

Download the weights: RT-DETR-D34 (ours), CLRerNet-Transformer-D34 (ours), CLRerNet-Transformer-R34 (ours).
Place the weights in the main folder, i.e. ADAS/.

Install

A Docker environment is recommended for installation:

docker-compose build --build-arg UID="`id -u`" dev
docker-compose run --rm dev

See Installation Tips for more details.

Inference on CULane Test Dataset

  • Download culaneyolo.zip and extract it into ADAS/dataset2/culaneyolo.
  • Download the CULane dataset and place the entire folder in ADAS/dataset2/culane. The resulting culane and culaneyolo structure is as follows:
ADAS/dataset2
├── culane
│   ├── annotations_new/
│   ├── driver_23_30frame/
│   ├── driver_37_30frame/
│   ├── driver_100_30frame/
│   ├── driver_161_90frame/
│   ├── driver_182_30frame/
│   ├── driver_193_90frame/
│   ├── laneseg_label_w16/
│   ├── laneseg_label_w16_test/
│   └── list/
└── culaneyolo
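Before running inference, it can help to confirm the tree above is in place. This is a hypothetical convenience helper, not part of the repo; the directory names are taken from the listing above.

```python
# Sanity-check the expected ADAS/dataset2 layout (illustrative helper,
# not part of the ADAS codebase).
from pathlib import Path

EXPECTED_DIRS = [
    "culane/annotations_new",
    "culane/driver_23_30frame",
    "culane/driver_37_30frame",
    "culane/driver_100_30frame",
    "culane/driver_161_90frame",
    "culane/driver_182_30frame",
    "culane/driver_193_90frame",
    "culane/laneseg_label_w16",
    "culane/laneseg_label_w16_test",
    "culane/list",
    "culaneyolo",
]

def missing_dirs(root):
    """Return the expected sub-directories that are absent under root."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
```

Calling `missing_dirs("ADAS/dataset2")` should return an empty list once both downloads are extracted.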

Run the following command to detect the objects and lanes in the CULane test images and visualize them:

Light model (CLRerNet-Transformer-D34):
python demo/ali.py configs/clrernet/culane/rtdetr_clrernet2.py ClrerNet_Transformer_D14.pth

Heavy model (CLRerNet-Transformer-R34):
python demo/ali.py configs/clrernet/culane/clrernet_culane_rtdetr.py ClrerNet_Transformer_R14.pth

If no --out-file directory is given, this saves each frame in the ADAS/result_dl folder.

Example Output: result_dl.zip

Inference on Islamabad Roads

  • Download the text file listing the frames: isb.txt
  • Download the frames: isb.zip
  • Extract isb.zip into the ADAS/isb folder.

Run the following command to detect the objects and lanes in the images and visualize them:

Light Model CLRerNet-Transformer-D34
python tools/Disb.py configs/clrernet/culane/rtdetr_clrernet.py ClrerNet_Transformer_D14.pth isb.txt --out-file=fyp_inference

If no --out-file directory is given, this saves each frame in the ADAS/fyp_inference folder.

Example Output: fyp_inference.zip

Train - To be updated

Make sure that the frame-difference npz file is prepared as dataset/culane/list/train_diffs.npz.
Run the following command to train a model on the CULane dataset:

python tools/train.py configs/clrernet/culane/clrernet_culane_dla34.py
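Since training expects train_diffs.npz to exist, a quick way to confirm the file was generated correctly is to list its stored arrays. An .npz file is a zip archive of .npy members, so the standard library suffices; the helper below is illustrative and not part of the repo, and it makes no assumption about which array names the file should contain.

```python
# Stdlib-only peek inside an .npz archive (an .npz is a zip of .npy
# arrays). Illustrative helper, not part of the ADAS codebase.
import zipfile

def npz_members(path):
    """Return the array names stored in a .npz file."""
    with zipfile.ZipFile(path) as zf:
        return [name.removesuffix(".npy") for name in zf.namelist()]
```

If `npz_members("dataset/culane/list/train_diffs.npz")` raises or returns an empty list, regenerate the file before launching training.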

References

About

Joint object detection and lane segmentation using a fused, modified Real-Time Detection Transformer and CLRerNet
