PyTorch implementation of Grad-CAM and Grad-CAM++ that can visualize Class Activation Map (CAM) heatmaps for any classification network, including custom networks; also produces CAM maps for the Faster R-CNN and RetinaNet object detectors. Feel free to try it out, star the repo, and report issues...
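The core of Grad-CAM, which repositories like this implement, can be sketched on pre-extracted tensors: weight each feature map by the global-average-pooled gradient of the class score, sum, and apply a ReLU. This is a minimal NumPy illustration of that computation, not code from the repository; the toy activation/gradient arrays stand in for a real network's conv-layer outputs.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM sketch on pre-extracted tensors.

    activations: (K, H, W) feature maps from a chosen conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    # Channel weights alpha_k: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy tensors standing in for a real network's conv outputs (assumption)
acts = np.random.rand(8, 7, 7)
grads = np.random.rand(8, 7, 7)
heatmap = grad_cam(acts, grads)
```

In a real pipeline the activations and gradients would come from forward/backward hooks on the target layer; the heatmap is then upsampled to the input resolution.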
Class Activation Map (CAM) Visualizations in PyTorch.
Surrogate quantitative interpretability for deep networks.
Official repository for the paper "Instance-wise Causal Feature Selection for Model Interpretation" (CVPRW 2021)
This article explores the theory behind explainable car pricing using value decomposition, showing how machine learning models can break a predicted price into intuitive components such as brand premium, age depreciation, mileage influence, condition effects, and transmission or fuel-type adjustments.
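The additive value decomposition the article describes can be sketched as a function that returns named components summing exactly to the predicted price. The effect sizes below are illustrative assumptions, not fitted values from the article.

```python
def decompose_price(base, brand_premium, age_years, mileage_km):
    """Hypothetical additive decomposition of a predicted car price.

    Each component is human-readable, and the prediction is their sum,
    so every dollar of the price is attributed to a named effect.
    """
    # Illustrative per-year and per-km effects (assumptions)
    age_depreciation = -500.0 * age_years
    mileage_effect = -0.05 * mileage_km
    components = {
        "base": base,
        "brand_premium": brand_premium,
        "age_depreciation": age_depreciation,
        "mileage_effect": mileage_effect,
    }
    return components, sum(components.values())

parts, price = decompose_price(base=15000.0, brand_premium=2000.0,
                               age_years=4, mileage_km=60000)
```

The design choice here is the key property of such explanations: the components are exhaustive, so they reconcile exactly with the model's output.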
Code for "Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability" (https://arxiv.org/abs/2010.09750)
Implementation of the Grad-CAM algorithm in an easy-to-use class, optimized for transfer learning projects and written using Keras and Tensorflow 2.x
Universal probing and interpretability tool for MLX language models on Apple Silicon
squid repository for manuscript analysis
Explainable AI (XAI) based system for detecting financial fraud using machine learning, with model interpretability, analysis, and research-backed implementation.
Scripts and trained models from our paper: M. Ntrougkas, N. Gkalelis, V. Mezaris, "T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers", IEEE Access, 2024. DOI:10.1109/ACCESS.2024.3405788.
Exercise on interpretability with integrated gradients.
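Integrated Gradients, the method this exercise covers, attributes a prediction by averaging gradients along the straight-line path from a baseline to the input and scaling by the input difference. A minimal NumPy sketch on a toy linear model (my own example, not the exercise's code), where the attribution is exact and satisfies the completeness axiom:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Integrated Gradients sketch: average the gradient along the
    straight path from baseline to x, then scale by (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps      # midpoint Riemann sum
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy linear model f(x) = w . x, whose gradient is the constant w
w = np.array([1.0, -2.0, 3.0])
f = lambda x: float(w @ x)
grad_f = lambda x: w
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
attrib = integrated_gradients(grad_f, x, baseline)
```

For this linear model the attributions recover `w * (x - baseline)` exactly, and their sum equals `f(x) - f(baseline)` (completeness); for real networks, libraries such as Captum compute the gradients by autodiff.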
Predictive modelling pipeline for customer or donor behaviour, with model comparison, ROC evaluation, and SHAP based interpretability using privacy safe features.
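SHAP-based interpretability, as used in pipelines like the one above, rests on Shapley values: a feature's attribution is its marginal contribution averaged over all coalitions. The exact computation is feasible only for a handful of features (SHAP libraries approximate it); this standalone sketch uses a toy additive coalition game of my own construction to show the formula.

```python
import itertools
import math

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.

    Feasible only for small n_features; SHAP approximates this sum.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                s = len(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(s) * math.factorial(n_features - s - 1)
                          / math.factorial(n_features))
                phi[i] += weight * (value_fn(set(subset) | {i})
                                    - value_fn(set(subset)))
    return phi

# Toy additive game: a coalition's value is the sum of fixed contributions
contrib = {0: 2.0, 1: -1.0, 2: 0.5}
v = lambda S: sum(contrib[j] for j in S)
phi = shapley_values(v, 3)
```

For an additive game the Shapley values recover each feature's fixed contribution, and they sum to the value of the full coalition (the efficiency property that makes SHAP attributions reconcile with the model's output).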
Autonomous Metal is an autonomous AI workflow designed to mimic a quantitative commodity analyst, transforming market data and economic indicators into explainable forecasts and analyst-style insights for LME Aluminum price movements.
A lightweight Explainable AI CNN for PathMNIST medical imaging, achieving 91%+ accuracy with Integrated Gradients and SQLite-based attribution storage. Built in PyTorch, this scalable model delivers high performance, transparency, and real-world readiness, making it ideal for medical AI, edge deployment, and explainable deep learning research.
Code and resolution comparisons for visualizing the loss landscape of a GAN and for understanding what a loss landscape is.
🎯 Deep Learning Model Analysis Made Easy: Visualize and understand your model's behavior, attention patterns, and decision boundaries with interactive visualizations.
4.76x Faster Attribution Graph Generation for LLMs and VLMs - Achieves 79% speedup by eliminating Python loops and vectorizing GPU operations. Works with GPT, LLaMA, Qwen, LLaVA, CLIP
Built and deployed a Flask-based machine learning system to predict loan default risk using customer demographics and financial indicators. Applied advanced ensemble models like XGBoost and LightGBM to achieve ~99% accuracy. Designed a full-stack solution with real-time prediction capabilities, enabling faster, smarter loan decisions in banking.
🔍 Streamline tabular binary classification with model interpretability and SHAP consistency analysis for clear insights and robust evaluation.