This class is a broad overview of exploiting AI: the different attacks that exist and best-practice defense strategies.
APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024)
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
The official implementation of the USENIX Security '23 paper "Meta-Sift": find a clean subset of 1,000 or more samples in a poisoned dataset in ten minutes or less.
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
Experiments on data poisoning in regression learning.
CCS'22 Paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021)
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
Code for the paper Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems.
A backdoor attack in a federated learning setting using the FATE framework
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
TOAN is a toolkit designed to simplify the generation of poisoned datasets for machine learning robustness research.
A research framework for implementing and evaluating poisoning attacks on Retrieval-Augmented Generation (RAG) systems, enabling the study of their security vulnerabilities.
Implementation of backdoor attacks and defenses in malware classification using machine learning models.
A federated learning framework built with Flower and PyTorch to evaluate the robustness of FL systems under data poisoning attacks.
An experiment in backdooring a shell safety classifier by planting a hidden trigger in its training data; a minimal sketch of this trigger-planting pattern appears after the list.
🛡️ PROACT: PROjection and Activation Constrained Training for poisoning-resilient continual learning
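Several of the backdoor-attack projects above (the shell safety classifier experiment, the malware classification implementations) follow the same basic recipe: stamp a fixed trigger pattern onto a small fraction of training samples and relabel them to an attacker-chosen target class, so the trained model learns to associate the trigger with that class. The sketch below is a minimal illustration of that recipe, not code from any listed repository; the function name, parameters, and the NumPy image layout are all assumptions.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class, poison_frac=0.05,
                        trigger_value=1.0, patch_size=3, seed=0):
    """Stamp a small corner patch on a random subset of images and
    relabel those samples to the attacker's target class.

    Assumes `images` is a float array of shape (N, H, W, C) in [0, 1].
    Illustrative only; not the interface of any repository listed above.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # The "trigger": a solid bright patch in the bottom-right corner.
    images[idx, -patch_size:, -patch_size:, :] = trigger_value
    # The label flip: every triggered sample now points at the target class.
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 fake 8x8 grayscale images across 10 classes.
if __name__ == "__main__":
    X = np.random.rand(100, 8, 8, 1).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    Xp, yp, poisoned_idx = poison_with_trigger(X, y, target_class=7)
    print(f"poisoned {len(poisoned_idx)} of {len(X)} samples; "
          f"triggered labels are now {np.unique(yp[poisoned_idx])}")
```

A model trained on the poisoned pair will typically behave normally on clean inputs but predict the target class whenever the patch is present, which is the failure mode that defenses in this list, such as Meta-Sift's clean-subset sifting, aim to counter.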