SegInit

Uncertainty-Driven Segmentation Framework for Deep Learning Research


🚧 Work in Progress: This framework is under active development. Core architecture is stable, but training pipelines and full model implementations are still being finalized.

Overview

SegInit introduces a novel approach to image segmentation using uncertainty-driven learning and attention-based patch processing. The framework is designed for researchers working on medical imaging, autonomous driving, and other precision segmentation tasks.

Key Innovations

  • 🎯 Uncertainty-Driven Layers: Neural network components that adapt to prediction uncertainty, focusing computational resources on ambiguous regions
  • 📦 Attention-Based Cropping: Dynamic patch selection mechanism that identifies and processes the most relevant image regions
  • 🔧 Configuration-Driven Architecture: YAML-based layer composition allowing rapid experimentation with different network topologies
  • ⚡ Flexible Skip Connections: Configurable layer skipping for architectural optimization
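The internals of the uncertainty-driven layers are not documented here, but the underlying idea can be illustrated with a minimal, self-contained NumPy sketch of one common uncertainty measure: normalized predictive entropy per pixel, which a layer could use to weight where extra computation is spent. The function names below are hypothetical, not part of the SegInit API:

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_map(logits):
    """Per-pixel normalized entropy: 0 = fully confident, 1 = maximally uncertain."""
    probs = softmax(logits, axis=0)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0)
    return entropy / np.log(logits.shape[0])  # divide by log(num_classes)

# Toy 3-class logit map over a 4x4 pixel grid
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))
u = uncertainty_map(logits)   # shape (4, 4), values in [0, 1]
```

High values of `u` mark ambiguous pixels; a gating layer can allocate more capacity (or attention) to exactly those regions.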

Architecture

SegInit Framework
├── Core Layers          # Building blocks (CoreConv, UncertaintyDrivenLayer)
├── Base Components      # Intermediate layers (ResidualBlock, ConvBlock)
├── Structural Models    # Complete architectures (ACPNet, Encoders)
└── Configuration        # YAML-driven layer definitions
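The attention-based cropping component (AttentionCroppingAndPacking) is not specified in detail here; as an illustration of the general idea, the sketch below selects the top-k highest-scoring non-overlapping patches from a 2D score map (such as the uncertainty map above). The `top_k_patches` helper is hypothetical, not SegInit's actual implementation:

```python
import numpy as np

def top_k_patches(score_map, patch=8, k=4):
    """Return (row, col) top-left corners of the k highest-scoring
    non-overlapping patches in a 2D score map."""
    H, W = score_map.shape
    assert H % patch == 0 and W % patch == 0
    # Mean score per patch via reshape: (H//p, p, W//p, p) -> (H//p, W//p)
    grid = score_map.reshape(H // patch, patch, W // patch, patch).mean(axis=(1, 3))
    best = np.argsort(grid.ravel())[::-1][:k]     # flat indices, best first
    rows, cols = np.unravel_index(best, grid.shape)
    return [(int(r) * patch, int(c) * patch) for r, c in zip(rows, cols)]

# Toy example: one high-scoring region in a 32x32 map
scores = np.zeros((32, 32))
scores[8:16, 16:24] = 1.0                         # a single hot 8x8 patch
corners = top_k_patches(scores, patch=8, k=1)
print(corners)  # [(8, 16)]
```

Only the selected crops would then be processed at full resolution, which is what makes dynamic patch selection cheaper than dense processing.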

Quick Start

Installation

# Clone the repository
git clone https://github.com/luiserrador/SegInit.git
cd SegInit

# Install in development mode
pip install -e ".[dev]"

Basic Usage

import torch
from seginit.layers import UncertaintyDrivenLayer, CoreConv

# Create uncertainty-aware convolutional layer
layer = UncertaintyDrivenLayer(
    in_channels=3,
    out_channels=64,
    consistency_channels=16
)

# Process sample input
x = torch.randn(1, 3, 256, 256)
output = layer(x)
print(f"Output shape: {output.shape}")  # [1, 64, 256, 256]

# Use configuration-driven approach
core_layer = CoreConv(
    in_channels=3,
    out_channels=64,
    dropout_p=0.2,
    skip_layers='normalization'  # Skip batch normalization
)

Development Status

Component            Status          Notes
Core Layers          ✅ Complete      UncertaintyDrivenLayer, CoreConv implemented
Base Components      ✅ Complete      ResidualBlock, ConvBlock, AttentionCroppingAndPacking
Structural Models    🚧 In Progress   ACPNet architecture being finalized
Training Pipeline    ⏳ Planned       End-to-end training scripts coming soon
Pre-trained Models   ⏳ Planned       Checkpoints for common datasets

Configuration System

The framework uses YAML configuration files to define layer architectures:

# Example: Custom convolutional block
base:
  CustomConvBlock:
    convolution:
      module: "Conv2d"
      kernel_size: 3
      padding: 1
    normalization:
      module: "BatchNorm2d"
    activation:
      module: "ReLU"
      inplace: true
    dropout:
      module: "Dropout2d"
      p: 0.1
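The YAML above maps each named stage to a module class plus keyword arguments. A minimal sketch of how such a spec could be turned into an ordered layer list follows; the builder, registry, and stand-in module classes are hypothetical stand-ins (not SegInit's actual loader or `torch.nn`), kept dependency-free so the sketch runs on its own:

```python
# Tiny stand-in classes in place of torch.nn modules, so the sketch is self-contained.
class Conv2d:
    def __init__(self, **kw): self.kw = kw
class BatchNorm2d:
    def __init__(self, **kw): self.kw = kw
class ReLU:
    def __init__(self, **kw): self.kw = kw
class Dropout2d:
    def __init__(self, **kw): self.kw = kw

REGISTRY = {"Conv2d": Conv2d, "BatchNorm2d": BatchNorm2d,
            "ReLU": ReLU, "Dropout2d": Dropout2d}

def build_block(spec, skip_layers=()):
    """Instantiate sub-layers in config order, skipping any named stages."""
    layers = []
    for stage, cfg in spec.items():
        if stage in skip_layers:
            continue                   # e.g. skip_layers=("normalization",)
        kwargs = {k: v for k, v in cfg.items() if k != "module"}
        layers.append(REGISTRY[cfg["module"]](**kwargs))
    return layers

spec = {  # same structure as the YAML block above, parsed into a dict
    "convolution":   {"module": "Conv2d", "kernel_size": 3, "padding": 1},
    "normalization": {"module": "BatchNorm2d"},
    "activation":    {"module": "ReLU", "inplace": True},
    "dropout":       {"module": "Dropout2d", "p": 0.1},
}
block = build_block(spec, skip_layers=("normalization",))
print([type(l).__name__ for l in block])  # ['Conv2d', 'ReLU', 'Dropout2d']
```

This is also the pattern behind the `skip_layers='normalization'` argument shown in Basic Usage: named stages are simply dropped from the composition order at build time.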

Development Setup

Prerequisites

  • Python 3.8 or higher
  • PyTorch 2.0+
  • CUDA (optional, for GPU acceleration)

Environment Setup

# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks (recommended for contributors)
pre-commit install

# Verify installation
pytest tests/ -v

Testing

# Run tests across Python versions
tox

# Run specific test suite
pytest tests/test_layers.py -v

# Check code quality
flake8 src tests
black --check src tests

Contributing

We welcome contributions! Please see our contributing guidelines:

Development Workflow

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Install pre-commit: pre-commit install
  4. Make your changes with tests
  5. Ensure code passes: tox
  6. Commit your changes: git commit -m 'Add amazing feature'
  7. Push to your branch: git push origin feature/amazing-feature
  8. Open a Pull Request

Code Standards

  • Formatting: Black with 120 character line length
  • Import sorting: isort with black profile
  • Linting: flake8 with complexity limit of 10
  • Testing: pytest with comprehensive coverage
  • Type hints: Encouraged for new code

Development Commands

make install        # Install package in development mode
make test          # Run tests
make test-all      # Run tests across all Python versions
make lint          # Check code quality
make format        # Auto-format code
make clean         # Clean build artifacts

Research Applications

This framework is designed for:

  • Medical Image Segmentation: Tumor detection, organ segmentation
  • Autonomous Driving: Lane detection, object segmentation
  • Remote Sensing: Land use classification, change detection
  • General Computer Vision: Any task requiring precise pixel-level predictions

Roadmap

Short Term (Q1 2025)

  • Complete ACPNet implementation
  • Add training pipeline for standard datasets
  • Comprehensive documentation and tutorials

Medium Term (Q2 2025)

  • Pre-trained models for COCO, Cityscapes
  • Benchmarking against state-of-the-art methods
  • Plugin system for custom uncertainty metrics

Long Term

  • Integration with popular frameworks (MMSegmentation, Detectron2)
  • Mobile-optimized model variants
  • Interactive demo and web interface

Citation

If you use SegInit in your research, please cite:

@software{seginit2024,
  title={SegInit: Uncertainty-Driven Segmentation Framework},
  author={Serrador, Luis},
  year={2024},
  url={https://github.com/luiserrador/SegInit}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with PyTorch and modern Python packaging standards
  • Inspired by uncertainty quantification research in computer vision
  • Special thanks to the open-source computer vision community

🔗 Links: Documentation | Issues | Discussions
