Visualizing the hidden structure of deep neural networks through activation maps.
Related article: NeuroScan: Your Model, I – Visualizing Activation Maps
NeuroScan is an interpretability tool for neural networks. It generates layer-wise activation visualizations from a given model and dataset. With a focus on transparency, it aims to help researchers and practitioners see how internal representations evolve across depth, especially in convolutional or transformer-based architectures.
- Layer-by-layer activation maps for image models
- Works with PyTorch models (CNNs, ResNets, Vision Transformers)
- Heatmap overlays and channel-wise visualizations
- Simple hooks for grabbing activations during inference
- Easy-to-use Jupyter interface
```shell
git clone https://github.com/Mircus/NeuroScan.git
cd NeuroScan
pip install -r requirements.txt
jupyter notebook NeuroScan_Demo.ipynb
```

NeuroScan generates intuitive visualizations like these:
- Feature maps from intermediate convolution layers
- Overlayed activation heatmaps on input images
- Comparative view across multiple layers or architectures
(Insert sample image or animated gif if available)
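An activation overlay of this kind can be produced with plain NumPy and Matplotlib. The sketch below is generic illustration, not NeuroScan's own rendering code: it averages an activation tensor over channels, normalizes it to [0, 1], upsamples it to the image resolution, and draws it with partial transparency.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, safe in scripts and CI
import matplotlib.pyplot as plt

def overlay_heatmap(image, activation, alpha=0.5):
    """Blend a channel-averaged activation map over an HxWx3 image.

    `image` is a float array in [0, 1]; `activation` is (C, h, w),
    e.g. one feature map captured from an intermediate conv layer.
    (Hypothetical helper for illustration only.)
    """
    heat = activation.mean(axis=0)                       # (h, w) channel average
    heat = (heat - heat.min()) / (np.ptp(heat) + 1e-8)   # normalize to [0, 1]

    # Nearest-neighbour upsample to the image resolution.
    H, W = image.shape[:2]
    rows = np.arange(H) * heat.shape[0] // H
    cols = np.arange(W) * heat.shape[1] // W
    heat = heat[rows][:, cols]

    fig, ax = plt.subplots()
    ax.imshow(image)
    ax.imshow(heat, cmap="jet", alpha=alpha)  # colormapped, semi-transparent
    ax.axis("off")
    return fig, heat

# Toy example: random image and a fake 8-channel activation.
img = np.random.rand(64, 64, 3)
act = np.random.rand(8, 16, 16)
fig, heat = overlay_heatmap(img, act)
fig.savefig("overlay.png")
```

The `alpha` blend is the key design choice: it keeps the input image visible underneath the heatmap, so spatial activations can be read in context.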
- Hooks are registered on the layers of interest
- During a forward pass, activations are captured and stored
- Activations are post-processed (e.g., normalized and colormapped)
- Results are displayed as overlays or tiled maps
You can customize which layers to hook, how to aggregate activations, and how to display them.
- PyTorch
- Matplotlib & Seaborn
- Torchvision models
- Jupyter Notebook (or Colab)
MIT License; see the LICENSE file for details.
Issues and pull requests are welcome. If you use NeuroScan in your research or writing, please consider citing or linking the Medium article.
This is part of the Holomathics open research ecosystem.