# Awesome Fools [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
A curated list of adversarial samples. Inspired by [awesome-deep-vision](https://github.com/kjw0612/awesome-deep-vision), [awesome-adversarial-machine-learning](https://github.com/yenchenlin/awesome-adversarial-machine-learning), [awesome-deep-learning-papers](https://github.com/terryum/awesome-deep-learning-papers), and [awesome-architecture-search](https://github.com/markdtw/awesome-architecture-search).
### Contributing:
Please feel free to open a [pull request](https://github.com/layumi/Awesome-Fools/pulls) or [an issue](https://github.com/layumi/Awesome-Fools/issues) to add papers.
### Papers:
1. [Adversarial examples in the physical world](http://cn.arxiv.org/abs/1607.02533)
**(ICLR2017 Workshop)**
2. [DeepFool: a simple and accurate method to fool deep neural networks](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Moosavi-Dezfooli_DeepFool_A_Simple_CVPR_2016_paper.pdf)
**(CVPR2016)**
The idea in this work is close to the original formulation: iteratively apply a minimal perturbation toward the nearest decision boundary, and loop until the predicted label changes.
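The "loop until the predicted label changes" idea above can be sketched in pure Python for the affine-classifier case (where DeepFool's per-step perturbation has a closed form); `W`, `b`, and the overshoot value are illustrative, not from the paper's code:

```python
def deepfool_affine(x, W, b, max_iter=50, overshoot=0.02):
    """DeepFool loop for an affine classifier f_k(x) = W[k].x + b[k].

    Repeatedly takes the smallest step toward the nearest decision
    boundary, stopping as soon as the predicted label changes
    (the paper's stopping rule)."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    scores = lambda z: [dot(wk, z) + bk for wk, bk in zip(W, b)]
    k0 = max(range(len(b)), key=lambda k: scores(x)[k])  # original label
    x_adv = list(x)
    for _ in range(max_iter):
        f = scores(x_adv)
        if max(range(len(f)), key=lambda j: f[j]) != k0:
            break  # label changed: done
        best = None
        for j in range(len(f)):  # find the closest boundary f_j = f_k0
            if j == k0:
                continue
            wd = [wj - w0 for wj, w0 in zip(W[j], W[k0])]
            fd = f[j] - f[k0]
            norm2 = dot(wd, wd)
            dist = abs(fd) / norm2 ** 0.5
            if best is None or dist < best[0]:
                best = (dist, fd, wd, norm2)
        _, fd, wd, norm2 = best
        step = (abs(fd) / norm2) * (1 + overshoot)  # minimal step, slightly overshot
        x_adv = [xi + step * wi for xi, wi in zip(x_adv, wd)]
    return x_adv
```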
3. [Learning with a strong adversary](http://cn.arxiv.org/pdf/1511.03034.pdf)
**(rejected by ICLR2016?)** Applies the spirit of GANs to the optimization.
4. [Decision-based Adversarial Attacks: Reliable Attacks Against Black-box Machine Learning Models](http://cn.arxiv.org/pdf/1712.04248.pdf)
**(ICLR2018)** [[code]](https://github.com/bethgelab/foolbox)
5. [The limitations of deep learning in adversarial settings](https://arxiv.org/pdf/1511.07528.pdf) **(EuroS&P2016)** (IEEE European Symposium on Security & Privacy) Proposes the Jacobian-based Saliency Map Attack (JSMA), which selects input features via a saliency map built from the forward derivative instead of a loss gradient.
6. [Generating Natural Adversarial Examples](https://openreview.net/forum?id=H1BLjgZCb&noteId=r1dkEyaSG) **(ICLR2018)**
7. [Simple Black-Box Adversarial Perturbations for Deep Networks](https://arxiv.org/pdf/1612.06299.pdf) **(CVPR17 Workshop)** Black-box attacks based on single-pixel and local-search perturbations.
8. [Boosting Adversarial Attacks with Momentum](https://arxiv.org/pdf/1710.06081.pdf) **(CVPR2018 Spotlight)**
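The momentum idea in the paper above (MI-FGSM) can be sketched in a few lines of pure Python; the gradient oracle `grad_fn` and the step schedule are illustrative assumptions, not the paper's exact setup:

```python
def mi_fgsm(x, grad_fn, eps, steps=10, mu=1.0):
    """Momentum Iterative FGSM (MI-FGSM) sketch.

    grad_fn(x) returns the loss gradient w.r.t. x (a hypothetical
    oracle standing in for a real model).  The momentum term
    accumulates L1-normalized gradients across iterations,
    stabilizing the update direction -- the paper's key idea."""
    alpha = eps / steps                     # per-step budget
    sign = lambda v: (v > 0) - (v < 0)
    g = [0.0] * len(x)                      # momentum accumulator
    x_adv = list(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        l1 = sum(abs(gi) for gi in grad) or 1.0
        g = [mu * gi + di / l1 for gi, di in zip(g, grad)]
        x_adv = [xi + alpha * sign(gi) for xi, gi in zip(x_adv, g)]
    # project back into the L_inf eps-ball around the clean input
    return [min(max(xa, xi - eps), xi + eps) for xa, xi in zip(x_adv, x)]
```

With `mu=0` this degenerates to the plain iterative FGSM; the momentum term is what carries the attack through narrow, poorly conditioned loss regions.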
9. [Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition](https://www.archive.ece.cmu.edu/~lbauer/papers/2016/ccs2016-face-recognition.pdf) **(CCS2016)** Same spirit as the least-likely-class attack.
10. [Adversarial examples for semantic image segmentation](https://arxiv.org/abs/1703.01101) **(ICLR2017 Workshop)** The attack follows the same procedure as in the classification case.
11. [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572)
**(ICLR2015)** Fast Gradient Sign Method
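The Fast Gradient Sign Method above is a single step of size `eps` along the sign of the loss gradient. A minimal pure-Python sketch on a hypothetical binary logistic model (the weights `w` and bias `b` are illustrative, not from the paper):

```python
import math

def fgsm(x, w, b, y, eps):
    """One-step FGSM on a binary logistic model p = sigmoid(w.x + b).

    Moves each input coordinate by eps in the direction that
    increases the cross-entropy loss for the true label y (0 or 1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # For cross-entropy loss, dL/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Because only the sign of the gradient is used, every coordinate moves by exactly `eps`, giving the familiar L-infinity-bounded perturbation.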
12. [Open Set Adversarial Examples](https://arxiv.org/abs/1809.02681) Attacks image retrieval models.
13. [Ensemble Adversarial Training: Attacks and Defenses](https://openreview.net/forum?id=rkZvSe-RZ) **(ICLR2018)**
14. [Adversarial Manipulation of Deep Representations](https://arxiv.org/abs/1511.05122) **(ICLR2016)** Attacks intermediate-layer activations rather than the output label.
### To Read:
1. [Exploring the space of adversarial images](http://ieeexplore.ieee.org/document/7727230/)
**(IJCNN2016)**
2. [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) **(ICLR2018)**
3. [Stochastic Activation Pruning for Robust Adversarial Defense](https://openreview.net/forum?id=H1uR4GZRZ) **(ICLR2018)**
4. [Mitigating Adversarial Effects Through Randomization](https://openreview.net/forum?id=Sk9yuql0Z) **(ICLR2018)**
5. [Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples](https://arxiv.org/abs/1802.00420) **(ICLR2018)** [[Github]](https://github.com/anishathalye/obfuscated-gradients)
### Talks
1. [Ian Goodfellow's guest lecture on adversarial examples (Stanford CS231n, 2017)](http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture16.pdf)
### Blogs
1. https://bair.berkeley.edu/blog/2017/12/30/yolo-attack/
### Competition
1. MCS2018: https://github.com/Atmyre/MCS2018_Solution