Open Source Explainability Tools
An open-source bias-audit toolkit that helps data scientists, machine learning researchers, and policymakers audit machine learning models for discrimination and bias, and make informed, equitable decisions when developing and deploying predictive risk-assessment tools.
License: MIT License
A toolkit for interpretability and explainability of data and machine learning models, including a comprehensive set of algorithms that cover different dimensions of explanation, along with proxy explainability metrics.
License: Apache License 2.0
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
License: Apache License 2.0
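A minimal sketch of the kind of dataset-level fairness check this toolkit supports, assuming the entry refers to the aif360 package; the toy data and group definitions are illustrative only:

```python
# Hedged sketch assuming the aif360 package: disparate impact and
# statistical parity difference on a toy binary-label dataset.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "feat":  [0.2, 0.5, 0.7, 0.1, 0.9, 0.4],
    "sex":   [0, 1, 0, 1, 0, 1],          # protected attribute (toy values)
    "label": [0, 1, 1, 0, 1, 0],
})
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(ds,
                                  privileged_groups=[{"sex": 1}],
                                  unprivileged_groups=[{"sex": 0}])
print(metric.disparate_impact(), metric.statistical_parity_difference())
```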
Alibi is an open-source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.
License: Apache License 2.0
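A minimal sketch of Alibi's anchor explanations for tabular data; the toy model and feature names are illustrative:

```python
# Hedged sketch: explaining one prediction of a scikit-learn classifier
# with Alibi's AnchorTabular explainer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

rng = np.random.default_rng(0)
X_train = rng.random((500, 4))                      # toy data
y_train = (X_train[:, 0] > 0.5).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = AnchorTabular(clf.predict, feature_names=["f0", "f1", "f2", "f3"])
explainer.fit(X_train)                              # fit sampling statistics on training data
explanation = explainer.explain(X_train[0], threshold=0.95)
print(explanation.anchor, explanation.precision)
```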
Code for the paper “High-Precision Model-Agnostic Explanations”, a model-agnostic system that explains the behaviour of complex models with high-precision rules called anchors.
License: BSD 2-Clause "Simplified" License
A model interpretability and understanding library for PyTorch developed by Facebook. It contains general-purpose implementations of integrated gradients, saliency maps, SmoothGrad, VarGrad, and others for PyTorch models.
License: BSD 3-Clause "New" or "Revised" License
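A minimal sketch, assuming this entry refers to Captum, of attributing a prediction with integrated gradients; the toy model is illustrative:

```python
# Hedged sketch assuming the captum package: integrated-gradients attribution
# for a small PyTorch model.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # toy model
model.eval()

inputs = torch.rand(1, 4, requires_grad=True)
ig = IntegratedGradients(model)
# per-feature attributions for output class 1, plus a convergence check
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions, delta)
```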
Example of classifier-agnostic saliency map extraction on ImageNet, as presented in the paper “Classifier-agnostic saliency map extraction”.
License: BSD 3-Clause "New" or "Revised" License
An adversarial example library for constructing attacks, building defenses, and benchmarking both; a Python library for benchmarking systems' vulnerability to adversarial examples.
License: MIT License
Python script for model-agnostic contrastive/counterfactual explanations for machine learning. Accompanying code for the paper “Contrastive Explanations with Local Foil Trees”.
License: BSD 3-Clause "New" or "Revised" License
Codebase that contains the methods in the paper “Learning Important Features Through Propagating Activation Differences”. Slides and a video of the 15-minute talk given at ICML are also available.
License: MIT License
“Explain Like I’m 5” is a Python package that helps debug machine learning classifiers and explain their predictions.
License: MIT License
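A minimal sketch of inspecting a scikit-learn model globally and locally with the eli5 package; the toy model is illustrative and API availability depends on installed versions:

```python
# Hedged sketch assuming the eli5 package: global weights plus one local explanation.
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(eli5.format_as_text(eli5.explain_weights(clf)))           # global: learned weights
print(eli5.format_as_text(eli5.explain_prediction(clf, X[0])))  # local: one prediction
```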
Facets contains two robust visualizations to aid in understanding and analyzing machine learning datasets. Get a sense of the shape of each feature of your dataset using Facets Overview, or explore individual observations using Facets Dive.
License: Apache License 2.0
Fairlearn is a Python toolkit for assessing and mitigating unfairness in machine learning models.
License: MIT License
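A minimal sketch of Fairlearn's assessment API; the toy labels and sensitive feature are illustrative:

```python
# Hedged sketch: per-group accuracy and demographic parity with Fairlearn.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])
sex = np.array(["F", "M", "F", "M", "F", "M"])      # sensitive feature (toy values)

mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)                                   # accuracy per group
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```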
This repository facilitates benchmarking of fairness-aware machine learning algorithms, based on this paper.
License: Other
The tool supports teams in evaluating, improving, and comparing models for fairness concerns, in partnership with the broader TensorFlow toolkit.
License: Apache License 2.0
Attention-based, summarized post-hoc explanations for detecting and identifying bias in data. The authors propose a global explanation and introduce a step-by-step framework for detecting and testing bias. A Python package for image data.
License: No License
An open-source library for visually analyzing Keras models with methods such as Deep Taylor Decomposition, PatternNet, saliency maps, and integrated gradients.
License: Other
This repository provides code for implementing integrated gradients for networks with image inputs.
License: No License
InterpretML is an open-source package for training interpretable models and explaining black-box systems.
License: Unknown
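A minimal sketch of training and explaining a glassbox model with InterpretML; the dataset choice is illustrative:

```python
# Hedged sketch: an Explainable Boosting Machine with global and local explanations.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

global_expl = ebm.explain_global()            # per-feature shape functions and importances
local_expl = ebm.explain_local(X[:5], y[:5])  # explanations for individual rows
```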
keras-vis is a high-level toolkit for visualizing and debugging trained Keras neural network models. Currently supported visualizations include activation maximization, saliency maps, and class activation maps.
License: MIT License
Code for replicating the experiments in the paper “Learning to Explain: An Information-Theoretic Perspective on Model Interpretation”, presented at ICML 2018.
License: No License
A Python framework for self-supervised learning on images. The learned representations can be used to analyze the distribution of unlabeled data and rebalance datasets.
License: MIT License
LOFO (Leave One Feature Out) Importance calculates the importance of a set of features for a model, metric, and validation scheme of choice by iteratively removing each feature from the set and evaluating the model's performance on the chosen metric.
License: MIT License
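A minimal sketch of the workflow, assuming this entry refers to the lofo-importance package; the toy DataFrame is illustrative, and no explicit model is passed so the library falls back to its default learner:

```python
# Hedged sketch assuming the lofo-importance package.
import pandas as pd
from sklearn.model_selection import KFold
from lofo import LOFOImportance, Dataset

df = pd.DataFrame({"f1": range(100),
                   "f2": range(100, 200),
                   "target": [0, 1] * 50})           # toy data
dataset = Dataset(df=df, target="target", features=["f1", "f2"])

cv = KFold(n_splits=4, shuffle=True, random_state=0)
lofo = LOFOImportance(dataset, cv=cv, scoring="roc_auc")
importance_df = lofo.get_importance()                # one row per feature
print(importance_df)
```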
MindsDB is an explainable AutoML framework for developers. With MindsDB you can build, train, and use state-of-the-art ML models with as little as one line of code.
License: Other
A Python package for AutoML on tabular data with feature engineering, hyperparameter tuning, explanations, and automatic documentation.
License: MIT License
Viewer for neural network, deep learning and machine learning models.
License: MIT License
A model-agnostic tool for decomposing predictions from black-box models. The Break Down table shows the contribution of every variable to a final prediction.
License: Other
A toolkit for auditing and mitigating bias and unfairness in machine learning systems.
License: MIT License
SHapley Additive exPlanations (SHAP) is a unified approach to explaining the output of any machine learning model.
License: MIT License
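A minimal sketch of the shap package applied to a tree ensemble; the dataset and plot choice are illustrative:

```python
# Hedged sketch: SHAP values for a random forest, with a global summary plot.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)           # fast explainer for tree ensembles
shap_values = explainer.shap_values(X.iloc[:100])
shap.summary_plot(shap_values, X.iloc[:100])    # global view of feature impact
```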
Shapash is a Python library that provides several types of visualization that display explicit labels that everyone can understand.
License: Apache License 2.0
TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer.
License: Apache License 2.0
Package for interpreting scikit-learn’s decision tree and random forest predictions. Allows decomposing each prediction into bias and feature contribution components, as described here.
License: BSD 3-Clause "New" or "Revised" License
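A minimal sketch, assuming this entry refers to the treeinterpreter package; the toy regression data is illustrative:

```python
# Hedged sketch: prediction = bias + sum of per-feature contributions.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

X, y = load_diabetes(return_X_y=True)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

prediction, bias, contributions = ti.predict(rf, X[:1])
# the decomposition reconstructs the prediction up to floating-point error
print(prediction[0], bias[0] + np.sum(contributions[0]))
```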
An easy-to-use interface for expanding understanding of a black-box classification or regression ML model.
License: Apache License 2.0
An eXplainability toolbox for machine learning.
License: MIT License
Last Updated: Dec 26, 2023