PASTA Web Interface provides a user-friendly GUI for spatial phenotype prediction on pathology images. No coding or JSON configuration required - complete the entire prediction workflow through your browser.
Please follow the installation instructions in README.md.
```bash
python web_ui.py
```

Launch parameters:
- `--host`: Host address (default: `0.0.0.0`)
- `--port`: Port number (default: `7860`)
Examples:
```bash
# Simply run
python web_ui.py --port 7860

# Specify GPU device
CUDA_VISIBLE_DEVICES=0 python web_ui.py
```

Open a browser and visit:
- Local access: http://localhost:7860
- LAN access: http://<your-ip>:7860
The interface is divided into left and right panels:
Left Panel - Input Configuration
- File Upload: Upload WSI image files
- Model Configuration: Select backbone model and pathway configuration
- Prediction Parameters: Set prediction mode and downsampling factor
- Advanced Options: Adjust patch size, step size, and other advanced parameters
Right Panel - Prediction Results
- Status Display: Real-time task progress
- Heatmaps: View prediction heatmaps for each pathway
- Overlay Images: View predictions overlaid on H&E images
- Statistics: View prediction statistics
- Download: Download complete results in h5ad format
Supported formats:
- `.svs`: Aperio format
- `.tif` / `.tiff`: TIFF format
- `.mrxs`: MIRAX format
- `.ndpi`: Hamamatsu format
Click the "Upload WSI File" area and select your pathology image file.
Backbone Model
- The system automatically scans for model weight files in the `model/` directory
- Select the corresponding backbone model (e.g., UNI, Virchow2, Gigapath)
- Then select the specific weight file
Pathway Configuration
- `default_14`: 14 tumor microenvironment pathways
- `default_16`: 16 tumor microenvironment pathways
- `313_Xenium`: 313 gene pathways
- `100_rep_genes`: 100 representative genes
- Other (check the Hugging Face repo for all available models)
Specify Pathways (Optional)
- To predict specific pathways only, enter them here
- Comma-separated, e.g., `CAF, T-cells, B-cells`
- Leave empty to predict all pathways
Prediction Mode
- `pixel`: High-resolution pixel-level prediction (recommended for detailed analysis)
- `spot`: Fast spot-level prediction (recommended for batch processing)
Downsampling Factor (1-20)
- Higher values result in lower output resolution but faster processing
- Recommended values:
- Detailed analysis: 4-6
- Regular analysis: 10
- Quick preview: 15-20
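As a rough mental model, the output heatmap resolution scales inversely with the downsampling factor. The sketch below assumes the heatmap size is simply the WSI pixel size divided by the factor; verify against your actual output, since the exact semantics depend on the pipeline:

```python
# Rough output-size estimate for a given downsampling factor.
# Assumption: heatmap resolution ~= WSI pixels / factor (illustrative only).
def estimated_heatmap_size(wsi_width, wsi_height, factor):
    """Return an approximate (width, height) of the output heatmap."""
    return wsi_width // factor, wsi_height // factor

# A hypothetical 40,000 x 30,000 px slide at the recommended factors:
for factor in (4, 10, 20):
    w, h = estimated_heatmap_size(40_000, 30_000, factor)
    print(f"factor {factor:2d}: ~{w} x {h}")
```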
Include TLS Prediction
- Check this option to additionally predict Tertiary Lymphoid Structures (TLS)
- Patch Size: Size of extracted image patches (default 256)
- Step Size: Step size between patches (default 128, smaller values produce denser predictions)
- Image Size: Output image dimensions (default 10)
- Device: GPU device (e.g., cuda:0, cuda:1) or cpu
- Save TIFF: Whether to save QuPath-compatible TIFF files
- Colormap: Color scheme for heatmaps
- Click "🚀 Start Prediction" button
- After task submission, system displays task ID
- Progress bar updates in real-time (auto-refresh every 3 seconds)
- Manually click "🔄 Refresh Status" button if needed
After prediction completes, view results in right-side tabs:
Heatmaps Tab
- Displays prediction heatmaps for each pathway
- Click images to enlarge
Overlay Tab
- Shows predictions overlaid on original H&E images
- More intuitive understanding of spatial distribution
Statistics Tab
- Displays statistics for each pathway (mean, std, min, max)
- Shows processing time and other information
Download Tab
- Download complete results in h5ad format
- Can be analyzed further with Scanpy, Squidpy, and other tools
All task files are stored in the `results/web_tasks/` directory:

```
results/web_tasks/
├── <timestamp>_<filename>/
│   ├── wsi/           # Original WSI files
│   ├── patches/       # Extracted patches
│   ├── masks/         # Tissue segmentation masks
│   ├── edge_info/     # Edge information
│   └── predictions/   # Prediction results
│       └── <sample_id>/
│           ├── plots/          # Heatmaps
│           ├── plots_overlay/  # Overlay images
│           └── *.h5ad          # AnnData files
```
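Given that layout, the result files for all tasks can be collected with a short glob. This is a standalone sketch assuming the directory structure shown above, not a PASTA API:

```python
# Minimal sketch: collect the generated .h5ad files for every task,
# assuming the results/web_tasks/ layout documented above.
from pathlib import Path

def find_prediction_files(root="results/web_tasks"):
    """Return all AnnData result files under <task>/predictions/<sample_id>/."""
    return sorted(Path(root).glob("*/predictions/*/*.h5ad"))

for f in find_prediction_files():
    print(f)
```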
- System automatically cleans up tasks older than 24 hours on startup
- Manually delete folders in the `results/web_tasks/` directory if needed
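If you prefer scripted cleanup, the 24-hour policy can be mirrored with a few lines of standard-library Python. This is an independent helper written against the documented behavior, not the server's own cleanup code:

```python
# Manual cleanup sketch: delete task folders older than max_age_hours,
# mirroring the server's documented 24-hour startup cleanup.
import shutil
import time
from pathlib import Path

def clean_old_tasks(root="results/web_tasks", max_age_hours=24):
    """Remove stale task directories; return the names that were deleted."""
    cutoff = time.time() - max_age_hours * 3600
    removed = []
    for task_dir in Path(root).iterdir():
        if task_dir.is_dir() and task_dir.stat().st_mtime < cutoff:
            shutil.rmtree(task_dir)
            removed.append(task_dir.name)
    return removed
```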
To use the Web Interface in Docker, expose the port in the Dockerfile:

```dockerfile
# Add to existing Dockerfile
EXPOSE 7860

# Launch command
CMD ["python", "web_ui.py", "--host", "0.0.0.0", "--port", "7860"]
```

```bash
# With GPU support
docker run --gpus all -p 7860:7860 -it mengflz/pasta:latest python web_ui.py

# Using Podman
podman run --device nvidia.com/gpu=all -p 7860:7860 -it mengflz/pasta:latest python web_ui.py
```

Then visit http://localhost:7860.
For proper model file recognition, use filenames containing model names:
```
model/
├── UNI_pos_embed.pt
├── Virchow2_no_pos_embed.pt
├── gigapath_weights.pt
└── Phikonv2_no_pos_embed.pt
```
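A plausible sketch of why the naming matters: the backbone can be inferred from the filename by case-insensitive substring matching. The actual scanning logic in `web_ui.py` may differ; this is only an illustration of the convention:

```python
# Illustrative matcher: infer the backbone from a weight filename.
# KNOWN_BACKBONES and the matching rule are assumptions for illustration.
KNOWN_BACKBONES = ["UNI", "Virchow2", "Gigapath", "Phikonv2"]

def infer_backbone(filename):
    """Return the first backbone whose name appears in the filename, else None."""
    lowered = filename.lower()
    for name in KNOWN_BACKBONES:
        if name.lower() in lowered:
            return name
    return None

print(infer_backbone("UNI_pos_embed.pt"))     # UNI
print(infer_backbone("gigapath_weights.pt"))  # Gigapath
```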
For very large WSI files (>5GB):
- Use higher downsampling factors (8-16)
- Consider spot mode instead of pixel mode
- Ensure sufficient disk space (at least 3x the WSI file size)
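The disk-space guidance can be checked before submitting a task. A minimal pre-flight helper using only the standard library (the 3x multiplier comes from the guidance above; `has_enough_space` is a hypothetical name):

```python
# Pre-flight check: confirm at least 3x the WSI file size is free
# on the volume holding the results directory.
import os
import shutil

def has_enough_space(wsi_path, results_dir=".", multiplier=3):
    """Return True if free space >= multiplier * size of the WSI file."""
    needed = os.path.getsize(wsi_path) * multiplier
    free = shutil.disk_usage(results_dir).free
    return free >= needed
```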
The Web Interface is designed for single-file processing. For batch processing:
- Open multiple browser tabs to submit tasks simultaneously
- Or use command-line tools for batch processing (see README.md)
For remote access to Web Interface, use SSH port forwarding:
```bash
# Execute on local machine
ssh -L 7860:localhost:7860 user@remote-server
```