- Python 3.9 or higher
- CUDA 12.x (for GPU support)
- At least 8GB of RAM
- 10GB of free disk space
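A quick way to sanity-check these prerequisites on Linux (standard system tools, nothing from this repo):

```shell
python3 --version                              # expect 3.9 or higher
nvcc --version 2>/dev/null | grep release || echo "nvcc not found"
free -h 2>/dev/null | head -n 2 || true        # installed RAM (Linux)
df -h .                                        # free disk space here
```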
```bash
git clone https://github.com/or4k2l/robust-vision.git
cd robust-vision
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install --upgrade pip
pip install -r requirements.txt
pip install -e .
```

The default installation includes JAX with CUDA 12 support:
```bash
pip install "jax[cuda12]>=0.4.20"
```

If you need CUDA 11 support:

```bash
pip install "jax[cuda11_cudnn82]>=0.4.20"
```

For CPU-only installation (the quotes keep the shell from treating `>=` as a redirect):

```bash
pip install "jax>=0.4.20"
```

Build and run the Docker image:

```bash
docker build -t robust-vision:latest .
docker run --gpus all \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/checkpoints:/app/checkpoints \
  -v $(pwd)/logs:/app/logs \
  robust-vision:latest
```

For an interactive shell inside the container:

```bash
docker run --gpus all -it \
  -v $(pwd):/app \
  robust-vision:latest bash
```

Verify your installation:
```python
import jax
import flax
import tensorflow as tf

print(f"JAX version: {jax.__version__}")
print(f"Flax version: {flax.__version__}")
print(f"TensorFlow version: {tf.__version__}")
print(f"JAX devices: {jax.devices()}")
```

If JAX doesn't detect your GPU:

- Check your CUDA installation: `nvcc --version`
- Reinstall JAX with CUDA support
- Check CUDA driver compatibility
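You can also check what JAX actually sees from Python (this assumes only that JAX itself imports):

```python
import jax

# default_backend() reports the platform JAX selected: "gpu", "tpu", or "cpu".
# If it prints "cpu" on a GPU machine, the CUDA install or driver is the issue.
print("Backend:", jax.default_backend())
print("Devices:", jax.devices())
```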
If you encounter OOM errors:
- Reduce batch size in config
- Use gradient accumulation
- Enable mixed precision training
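Gradient accumulation can be sketched framework-agnostically: average the gradients of several micro-batches and apply one update, which matches a single step on the correspondingly larger batch (for a mean-reduced loss and equal-sized micro-batches). The names below are illustrative, not part of the robust-vision API:

```python
def grad_fn(batch):
    # Stand-in for a per-batch gradient (here: the batch mean); a real
    # model would return parameter gradients from backprop.
    return sum(batch) / len(batch)

def accumulated_grad(micro_batches):
    # Average micro-batch gradients instead of stepping after each one,
    # so peak memory scales with the micro-batch size only.
    grads = [grad_fn(b) for b in micro_batches]
    return sum(grads) / len(grads)

micro = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
full = [x for b in micro for x in b]
print(accumulated_grad(micro), grad_fn(full))  # both 4.5
```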
TensorFlow warnings about GPU can be ignored if you're only using it for data loading.
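If you want to guarantee TensorFlow stays on the CPU (so it cannot reserve GPU memory that JAX needs), you can hide the GPU from it with the public `tf.config` API; call this before any other TensorFlow work:

```python
import tensorflow as tf

# Hide all GPUs from TensorFlow; tf.data input pipelines keep working on CPU.
tf.config.set_visible_devices([], "GPU")
print("TF visible GPUs:", tf.config.get_visible_devices("GPU"))  # []
```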
In a notebook environment, install directly from GitHub:

```python
!pip install git+https://github.com/or4k2l/robust-vision.git
```

Use a Deep Learning AMI with CUDA 12:
```bash
aws ec2 run-instances \
  --image-id ami-xxxxxxxxx \
  --instance-type p3.2xlarge \
  --key-name your-key
```

Use a Deep Learning VM Image:
```bash
gcloud compute instances create robust-vision \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-v100,count=1 \
  --image-family=common-cu121 \
  --image-project=deeplearning-platform-release
```

- Read TRAINING.md for training instructions
- Check DEPLOYMENT.md for deployment options
- Explore example notebooks in `notebooks/`