# SegInit: Uncertainty-Driven Segmentation Framework for Deep Learning Research
> 🚧 **Work in Progress**: This framework is under active development. Core architecture is stable, but training pipelines and full model implementations are still being finalized.
SegInit introduces a novel approach to image segmentation using uncertainty-driven learning and attention-based patch processing. The framework is designed for researchers working on medical imaging, autonomous driving, and other precision segmentation tasks.
## Key Features

- 🎯 **Uncertainty-Driven Layers**: Neural network components that adapt to prediction uncertainty, focusing computational resources on ambiguous regions
- 📦 **Attention-Based Cropping**: Dynamic patch selection mechanism that identifies and processes the most relevant image regions
- 🔧 **Configuration-Driven Architecture**: YAML-based layer composition allowing rapid experimentation with different network topologies
- ⚡ **Flexible Skip Connections**: Configurable layer skipping for architectural optimization
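To make the first two ideas concrete, here is a minimal, framework-independent sketch of uncertainty-driven patch selection: per-pixel prediction entropy is averaged over patches, and only the most ambiguous patches are selected for further processing. This is plain-Python illustration only; the function names and data layout are hypothetical and not part of SegInit's API.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_patches_by_uncertainty(prob_map, patch_size, top_k):
    """Split a 2-D grid of per-pixel class probabilities into square
    patches and return the top_k patch origins, most uncertain first.

    prob_map: nested list where prob_map[y][x] is a list of class probabilities.
    """
    h, w = len(prob_map), len(prob_map[0])
    scores = []
    for y0 in range(0, h, patch_size):
        for x0 in range(0, w, patch_size):
            cells = [
                prob_map[y][x]
                for y in range(y0, min(y0 + patch_size, h))
                for x in range(x0, min(x0 + patch_size, w))
            ]
            mean_ent = sum(entropy(c) for c in cells) / len(cells)
            scores.append((mean_ent, (y0, x0)))
    scores.sort(reverse=True)  # highest mean entropy first
    return [coord for _, coord in scores[:top_k]]

# Toy 4x4 map: left half confident ([0.99, 0.01]), right half ambiguous ([0.5, 0.5])
prob_map = [
    [[0.99, 0.01]] * 2 + [[0.5, 0.5]] * 2
    for _ in range(4)
]
selected = rank_patches_by_uncertainty(prob_map, patch_size=2, top_k=2)
print(selected)  # the two ambiguous right-hand patches: [(2, 2), (0, 2)]
```

An attention-based cropper in the real framework would then extract and process only these regions at full resolution, spending extra computation where predictions are least certain.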
## Architecture

```text
SegInit Framework
├── Core Layers        # Building blocks (CoreConv, UncertaintyDrivenLayer)
├── Base Components    # Intermediate layers (ResidualBlock, ConvBlock)
├── Structural Models  # Complete architectures (ACPNet, Encoders)
└── Configuration      # YAML-driven layer definitions
```
## Installation

```bash
# Clone the repository
git clone https://github.com/luiserrador/SegInit.git
cd SegInit

# Install in development mode
pip install -e ".[dev]"
```

## Quick Start

```python
import torch

from seginit.layers import UncertaintyDrivenLayer, CoreConv

# Create uncertainty-aware convolutional layer
layer = UncertaintyDrivenLayer(
    in_channels=3,
    out_channels=64,
    consistency_channels=16,
)

# Process sample input
x = torch.randn(1, 3, 256, 256)
output = layer(x)
print(f"Output shape: {output.shape}")  # [1, 64, 256, 256]

# Use configuration-driven approach
core_layer = CoreConv(
    in_channels=3,
    out_channels=64,
    dropout_p=0.2,
    skip_layers='normalization',  # Skip batch normalization
)
```

## Project Status

| Component | Status | Notes |
|-----------|--------|-------|
| ✅ Core Layers | Complete | UncertaintyDrivenLayer, CoreConv implemented |
| ✅ Base Components | Complete | ResidualBlock, ConvBlock, AttentionCroppingAndPacking |
| 🚧 Structural Models | In Progress | ACPNet architecture being finalized |
| ⏳ Training Pipeline | Planned | End-to-end training scripts coming soon |
| ⏳ Pre-trained Models | Planned | Checkpoints for common datasets |
## Configuration

The framework uses YAML configuration files to define layer architectures:
```yaml
# Example: Custom convolutional block
base:
  CustomConvBlock:
    convolution:
      module: "Conv2d"
      kernel_size: 3
      padding: 1
    normalization:
      module: "BatchNorm2d"
    activation:
      module: "ReLU"
      inplace: true
    dropout:
      module: "Dropout2d"
      p: 0.1
```

## Requirements

- Python 3.8 or higher
- PyTorch 2.0+
- CUDA (optional, for GPU acceleration)
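The YAML configuration above can be read as a recipe: each stage names a `module` and its keyword arguments, and a skip list (echoing CoreConv's `skip_layers` option) can drop whole stages. The sketch below illustrates one way such a spec could be consumed; it is a hypothetical loader, not SegInit's actual implementation, and uses a stub class where a real loader would instantiate `getattr(torch.nn, name)`.

```python
import functools

# Spec as it would look after yaml.safe_load of the block above (one entry).
spec = {
    "convolution":   {"module": "Conv2d", "kernel_size": 3, "padding": 1},
    "normalization": {"module": "BatchNorm2d"},
    "activation":    {"module": "ReLU", "inplace": True},
    "dropout":       {"module": "Dropout2d", "p": 0.1},
}

class Stub:
    """Stand-in for an nn.Module; records what would be constructed."""
    def __init__(self, name, **kwargs):
        self.name, self.kwargs = name, kwargs
    def __repr__(self):
        return f"{self.name}({self.kwargs})"

# Registry of constructible modules (a real loader would point at torch.nn).
REGISTRY = {n: functools.partial(Stub, n)
            for n in ("Conv2d", "BatchNorm2d", "ReLU", "Dropout2d")}

def build_block(spec, skip_layers=()):
    """Instantiate each stage in declaration order, skipping named stages."""
    layers = []
    for stage, cfg in spec.items():
        if stage in skip_layers:
            continue
        cfg = dict(cfg)                      # don't mutate the spec
        factory = REGISTRY[cfg.pop("module")]
        layers.append(factory(**cfg))
    return layers

block = build_block(spec, skip_layers=("normalization",))
print([layer.name for layer in block])  # ['Conv2d', 'ReLU', 'Dropout2d']
```

Because stages are looked up by name, new topologies can be tried by editing YAML alone, without touching model code.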
## Development

```bash
# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks (recommended for contributors)
pre-commit install

# Verify installation
pytest tests/ -v
```

### Testing

```bash
# Run tests across Python versions
tox

# Run specific test suite
pytest tests/test_layers.py -v

# Check code quality
flake8 src tests
black --check src tests
```

## Contributing

We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Install pre-commit: `pre-commit install`
- Make your changes with tests
- Ensure code passes: `tox`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to your branch: `git push origin feature/amazing-feature`
- Open a Pull Request
### Code Style

- Formatting: Black with 120 character line length
- Import sorting: isort with black profile
- Linting: flake8 with complexity limit of 10
- Testing: pytest with comprehensive coverage
- Type hints: Encouraged for new code
### Makefile Targets

```bash
make install   # Install package in development mode
make test      # Run tests
make test-all  # Run tests across all Python versions
make lint      # Check code quality
make format    # Auto-format code
make clean     # Clean build artifacts
```

## Use Cases

This framework is designed for:
- Medical Image Segmentation: Tumor detection, organ segmentation
- Autonomous Driving: Lane detection, object segmentation
- Remote Sensing: Land use classification, change detection
- General Computer Vision: Any task requiring precise pixel-level predictions
## Roadmap

- Complete ACPNet implementation
- Add training pipeline for standard datasets
- Comprehensive documentation and tutorials
- Pre-trained models for COCO, Cityscapes
- Benchmarking against state-of-the-art methods
- Plugin system for custom uncertainty metrics
- Integration with popular frameworks (MMSegmentation, Detectron2)
- Mobile-optimized model variants
- Interactive demo and web interface
## Citation

If you use SegInit in your research, please cite:
```bibtex
@software{seginit2024,
  title={SegInit: Uncertainty-Driven Segmentation Framework},
  author={Serrador, Luis},
  year={2024},
  url={https://github.com/luiserrador/SegInit}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Built with PyTorch and modern Python packaging standards
- Inspired by uncertainty quantification research in computer vision
- Special thanks to the open-source computer vision community
🔗 Links: Documentation | Issues | Discussions