11 Adversarial Machine Learning Tools and Resources for Robustness Testing, Attacks, and Defense

Discover open source tools and resources for testing the robustness of machine learning models against adversarial attacks.

Open Source Adversarial ML Tools

  • AdvBox

    A toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow. AdvBox can also benchmark the robustness of machine learning models.

    License: Apache License 2.0

  • Adversarial DNN Playground

    Think TensorFlow Playground, but for adversarial examples! A visualization tool designed for learning and teaching. The attack library is limited in size, but it has a nice front end with buttons you can press!

    License: Apache License 2.0

  • AdverTorch

    A library of adversarial attacks and defenses built specifically for PyTorch; a short usage sketch follows below.

    License: GNU Lesser General Public License v3.0
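
    A minimal sketch of an untargeted L-inf PGD attack with AdverTorch, assuming a PyTorch classifier trained on inputs scaled to [0, 1] (the model and data below are stand-ins):

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Tiny stand-in classifier and data; swap in your own trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(8, 1, 28, 28)     # batch of inputs in [0, 1]
y = torch.randint(0, 10, (8,))   # ground-truth labels

adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3,        # L-inf perturbation budget
    nb_iter=40,     # number of PGD iterations
    eps_iter=0.01,  # step size per iteration
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)
x_adv = adversary.perturb(x, y)  # adversarial batch, same shape as x
```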

  • CleverHans

    A library for testing adversarial attacks and defenses, maintained by some of the most important names in adversarial ML, namely Ian Goodfellow (ex-Google Brain, now Apple) and Nicolas Papernot (Google Brain). Comes with some nice tutorials! A short FGSM sketch follows below.

    License: MIT License
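
    A minimal FGSM sketch against a PyTorch model, assuming the v4-style cleverhans.torch attack API (the model and data are stand-ins):

```python
import torch
import torch.nn as nn
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# Tiny stand-in classifier and data; swap in your own trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(8, 1, 28, 28)  # batch of inputs in [0, 1]

# One FGSM step under an L-inf budget of 0.03.
x_adv = fast_gradient_method(model, x, eps=0.03, norm=float("inf"))
flips = (model(x).argmax(dim=1) != model(x_adv).argmax(dim=1)).sum().item()
print("labels flipped:", flips)
```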

  • Counterfit

    Counterfit is a command-line tool and generic automation layer for assessing the security of machine learning systems.

    License: MIT License

  • DEEPSEC

    Another systematic tool for attacking and defending deep learning models.

    License: MIT License

  • Foolbox

    The second-biggest adversarial library, with an even longer list of attacks than ART but no defenses or evaluation metrics. Geared more toward computer vision. Its code is easier to understand and modify than ART's, and it is also better suited for exploring black-box attacks on surrogate models. A short usage sketch follows below.

    License: MIT License
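
    A minimal robust-accuracy check under L-inf PGD, assuming the Foolbox 3 API and a PyTorch classifier (stand-ins below):

```python
import torch
import torch.nn as nn
import foolbox as fb

# Tiny stand-in classifier and data; swap in your own trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
images = torch.rand(8, 1, 28, 28)   # inputs in [0, 1]
labels = torch.randint(0, 10, (8,))

fmodel = fb.PyTorchModel(model, bounds=(0, 1))
attack = fb.attacks.LinfPGD()
# With a single epsilon, is_adv holds one boolean per sample.
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("robust accuracy:", 1 - is_adv.float().mean().item())
```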

  • Adversarial Robustness Toolbox (ART)

    ART provides tools that enable developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. A short usage sketch follows below.

    License: MIT License
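
    A minimal sketch of wrapping a PyTorch model in an ART estimator and running an FGSM evasion attack (the model and data are stand-ins):

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Tiny stand-in classifier; swap in your own trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)  # NumPy in, NumPy out
```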

  • MIA

    A library for running membership inference attacks (MIA) against machine learning models; a short sketch of the underlying idea follows below.

    License: MIT License
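
    The library implements shadow-model attacks in the style of Shokri et al.; the snippet below is not the library's API, just a hedged illustration of the simplest version of the idea: models tend to be more confident on training members than on unseen points, so a confidence threshold already makes a (weak) membership test:

```python
import numpy as np

def confidence_threshold_mia(softmax_outputs, threshold=0.9):
    """Guess "member" when the model's top-class confidence exceeds
    a threshold; training points usually score higher."""
    return softmax_outputs.max(axis=1) > threshold

# Synthetic softmax outputs: peaked for members, flatter for non-members.
rng = np.random.default_rng(0)
conf_members = rng.dirichlet(np.full(10, 0.1), size=100)
conf_nonmembers = rng.dirichlet(np.full(10, 1.0), size=100)

guesses = confidence_threshold_mia(np.vstack([conf_members, conf_nonmembers]))
truth = np.r_[np.ones(100, dtype=bool), np.zeros(100, dtype=bool)]
print("attack accuracy:", (guesses == truth).mean())
```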

  • TextFool

    Plausible-looking adversarial examples for text generation.

    License: MIT License

  • Trickster

    Library and experiments for attacking machine learning models in discrete domains using graph search; a hypothetical sketch of the idea follows below.

    License: MIT License
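
    Trickster's own interface aside, here is a hypothetical sketch of the greedy graph-search idea behind attacks in discrete domains (none of these names are Trickster's API):

```python
def greedy_discrete_attack(score, neighbors, x, max_steps=10):
    """Treat candidate edits as graph edges and greedily follow the
    neighbor that most lowers the classifier's score for the original
    class, stopping once the decision boundary is crossed.
    Both `score` and `neighbors` are hypothetical callables."""
    for _ in range(max_steps):
        candidates = neighbors(x)
        if not candidates:
            break
        best = min(candidates, key=score)
        if score(best) >= score(x):
            break  # no single edit improves; local optimum
        x = best
        if score(x) < 0.5:
            return x  # crossed the decision boundary
    return x

# Toy usage: feature vectors are bit tuples, an edit flips one bit.
def score(v):
    return sum(v) / len(v)  # toy classifier score for the original class

def neighbors(v):
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(len(v))]

print(greedy_discrete_attack(score, neighbors, (1, 1, 1, 1)))  # e.g. (0, 0, 0, 1)
```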

Last Updated: Dec 26, 2023