Fairness

Package that provides interfaces and built-in implementations for evaluating the fairness of models and datasets.

Metrics

AutoMLx provides a set of bias/fairness metrics, based on developments in the ML fairness community [1], to assess and measure whether a model or dataset complies with a specific notion of fairness. The provided metrics each correspond to a different notion of fairness, which the user should select carefully, taking their application’s context into account.

The metrics each implement a different criterion defining how a model or dataset should be unbiased with respect to a protected attribute. If an attribute is protected, then each of its unique values (for example, “male”, “female” or “other”) is considered a subgroup that should be protected in some way so as to receive equal outcomes from the model. These types of fairness metrics are known as group fairness metrics.

We provide a table summarizing the fairness metrics in the AutoMLx package. Choosing the right fairness metric for a particular application is critical; it requires domain knowledge of the complete sociotechnical system. Moreover, different metrics bring in different perspectives, and sometimes the data or model might need to be analyzed with multiple fairness metrics. This choice therefore depends on a combination of the domain, the task at hand, the societal impact of model predictions, policies and regulations, legal considerations, and so on, and cannot be fully automated. However, we hope that the table below will give some insight into which fairness metric is best for your application.

Machine learning models that decide outcomes affecting individuals can be either assistive or punitive. For example, a model that classifies whether or not a job applicant should be interviewed is assistive, because the model is screening for individuals who should receive a positive outcome. In contrast, a model that classifies loan applicants as high risk is punitive, because the model is screening for individuals who should receive a negative outcome. For models used in assistive applications, it is typically important to minimize false negatives (for example, to ensure individuals who deserve to be interviewed are interviewed), whereas in punitive applications, it is usually important to minimize false positives (for example, to avoid denying loans to individuals who have low credit risk). In the spirit of fairness, one should therefore aim to minimize the disparity in false negative rates across protected groups in assistive applications, and the disparity in false positive rates in punitive applications. In the following table, we have classified each metric based on whether it is most appropriate for models used in assistive or punitive applications (or both). For further explanations, please refer to the fairness and machine learning book by Hardt et al. [1].

Metric                | Dataset | Model | Punitive | Assistive | Perfect score means
--------------------- | ------- | ----- | -------- | --------- | -------------------
Consistency           | Yes     | No    | NA       | NA        | Neighbors (k-nearest) have the same labels
Smoothed EDF          | Yes     | No    | NA       | NA        | Sub-populations have equal probability of positive label (with log scaling of deviation)
Statistical Parity    | Yes     | Yes   | Yes      | Yes       | Sub-populations have equal probability of positive prediction
True Positive Rates   | No      | Yes   | No       | Yes       | Sub-populations have equal probability of positive prediction when their true label is positive
False Positive Rates  | No      | Yes   | Yes      | No        | Sub-populations have equal probability of positive prediction when their true label is negative
False Negative Rates  | No      | Yes   | No       | Yes       | Sub-populations have equal probability of negative prediction when their true label is positive
False Omission Rates  | No      | Yes   | No       | Yes       | Sub-populations have equal probability of a positive true label when their prediction is negative
False Discovery Rates | No      | Yes   | Yes      | No        | Sub-populations have equal probability of a negative true label when their prediction is positive
Equalized Odds        | No      | Yes   | Yes      | Yes       | Sub-populations have equal true positive rate and equal false positive rate
Error Rates           | No      | Yes   | Yes      | Yes       | Sub-populations have equal probability of a false prediction
Theil Index           | No      | Yes   | Yes      | Yes       | Error rates are the same for sub-populations and the whole population (deviations are measured using entropy)

[1] Moritz Hardt et al. “Fairness and Machine Learning: Limitations and Opportunities”. 2019.

For maximal versatility, all supported metrics are offered in two formats:

  1. A scikit-learn-like Scorer object which can be initialized and reused to test different models or datasets.

  2. A functional interface which can easily be used for one-line computations.
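For example, assuming a fitted binary classifier model and a pandas DataFrame X that contains a 'sex' column (hypothetical names), the two interfaces documented below can be used interchangeably; this is a minimal sketch:

from automlx.fairness.metrics import (
    ModelStatisticalParityScorer,
    model_statistical_parity,
)

# 1. Reusable scikit-learn-like scorer object
scorer = ModelStatisticalParityScorer(protected_attributes='sex')
score = scorer(model, X)

# 2. One-line functional interface on precomputed predictions
score = model_statistical_parity(y_pred=model.predict(X), subgroups=X[['sex']])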

Evaluating a Model

Statistical Parity

class ModelStatisticalParityScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measure the statistical parity [1] of a model’s output between subgroups and the rest of the population.

Statistical parity (also known as Base Rate or Disparate Impact) states that a predictor is unbiased if the prediction is independent of the protected attribute.

Statistical Parity is calculated as PP / N, where PP and N are the number of Positive Predictions and total Number of predictions made, respectively.

Perfect score

A perfect score for this metric means that the model does not predict the positive class for any subgroup at a different rate than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect statistical parity would mean that all combinations of values for race and sex have identical ratios of positive predictions. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Cynthia Dwork et al. “Fairness Through Awareness”. Innovations in Theoretical Computer Science. 2012.

Examples

from automlx.fairness.metrics import ModelStatisticalParityScorer

scorer = ModelStatisticalParityScorer(['race', 'sex'])
scorer(model, X, y_true)

This metric does not require y_true ; it can also be called using:

scorer(model, X)
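
The reduction and distance_measure options can be combined as needed; for instance (a sketch, reusing the imports, model and data from the example above):

# Per-subgroup scores as a dict instead of a single reduced value
scorer = ModelStatisticalParityScorer(['race', 'sex'], reduction=None)
per_subgroup = scorer(model, X)  # {subgroup: subgroup_metric, ...}

# Ratio-based distance; a perfect score is then 1 instead of 0
scorer = ModelStatisticalParityScorer(['race', 'sex'], distance_measure='ratio')
scorer(model, X)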
__call__(model, X, y_true=None, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list , or None , default=None ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame , or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .
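
For instance, if the model was trained without the protected attributes, they can be supplied separately through supplementary_features ; a sketch, with hypothetical column names:

# The model never sees 'race' or 'sex', so they are passed separately
X_model = X.drop(columns=['race', 'sex'])   # features seen by the model
protected = X[['race', 'sex']]              # protected attributes only

scorer = ModelStatisticalParityScorer(['race', 'sex'])
scorer(model, X_model, supplementary_features=protected)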

model_statistical_parity(y_true=None, y_pred=None, subgroups=None, distance_measure='diff', reduction='mean')

Measure the statistical parity of a model’s output between subgroups and the rest of the population.

For more details, refer to ModelStatisticalParityScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list or None , default=None ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list or None , default=None ) – Array of model predictions.

  • subgroups ( pandas.DataFrame or None , default=None ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a value of None is received for either y_pred or subgroups .

Examples

from automlx.fairness.metrics import model_statistical_parity
subgroups = X[['race', 'sex']]
model_statistical_parity(y_true, y_pred, subgroups)

This metric does not require y_true ; it can also be called using:

model_statistical_parity(None, y_pred, subgroups)
model_statistical_parity(y_pred=y_pred, subgroups=subgroups)
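
To make the computation concrete, here is a small hand-checkable sketch on hypothetical toy data, assuming the PP / N definition above:

import pandas as pd
from automlx.fairness.metrics import model_statistical_parity

# Group 'a' receives 75% positive predictions, group 'b' only 25%.
y_pred = pd.Series([1, 1, 1, 0, 1, 0, 0, 0])
subgroups = pd.DataFrame({'grp': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']})

# With distance_measure='diff', each subgroup is compared to the rest of
# the population: |0.75 - 0.25| = 0.5 for both subgroups, so the 'mean'
# reduction should also yield 0.5.
model_statistical_parity(y_pred=y_pred, subgroups=subgroups)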

True Positive Rate Disparity

class TruePositiveRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s true positive rate between subgroups and the rest of the population (also known as equal opportunity).

For each subgroup, the disparity is measured by comparing the true positive rate on instances of a subgroup against the rest of the population.

True Positive Rate [1] (also known as TPR, recall, or sensitivity) is calculated as TP / (TP + FN), where TP and FN are the number of true positives and false negatives, respectively.

Perfect score

A perfect score for this metric means that the model does not correctly predict the positive class for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect true positive rate disparity would mean that all combinations of values for race and sex have identical true positive rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Moritz Hardt et al. “Equality of Opportunity in Supervised Learning”. Advances in Neural Information Processing Systems. 2016.

Examples

from automlx.fairness.metrics import TruePositiveRateScorer
scorer = TruePositiveRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

true_positive_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s true positive rate between subgroups and the rest of the population.

For more details, refer to TruePositiveRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import true_positive_rate
subgroups = X[['race', 'sex']]
true_positive_rate(y_true, y_pred, subgroups)
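
The per-subgroup logic can also be sketched by hand with pandas, assuming the documented definition (TP / (TP + FN) per subgroup, compared with the 'diff' distance against the rest of the population). This helper is illustrative only, not the package implementation:

import pandas as pd

def tpr(y_true, y_pred):
    # TP / (TP + FN) on binary labels
    positives = y_true == 1
    return ((y_pred == 1) & positives).sum() / positives.sum()

def tpr_disparity_sketch(y_true, y_pred, group):
    # Mean absolute TPR gap between each subgroup and the rest (a sketch)
    gaps = []
    for g in group.unique():
        in_g = group == g
        gaps.append(abs(tpr(y_true[in_g], y_pred[in_g])
                        - tpr(y_true[~in_g], y_pred[~in_g])))
    return sum(gaps) / len(gaps)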

False Positive Rate Disparity

class FalsePositiveRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false positive rate between subgroups and the rest of the population.

For each subgroup, the disparity is measured by comparing the false positive rate on instances of a subgroup against the rest of the population.

False Positive Rate [1] (also known as FPR or fall-out) is calculated as FP / (FP + TN), where FP and TN are the number of false positives and true negatives, respectively.

Perfect score

A perfect score for this metric means that the model does not incorrectly predict the positive class for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect false positive rate disparity would mean that all combinations of values for race and sex have identical false positive rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Alexandra Chouldechova. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments”. Big Data (2016). https://www.liebertpub.com/doi/10.1089/big.2016.0047

Examples

from automlx.fairness.metrics import FalsePositiveRateScorer
scorer = FalsePositiveRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

false_positive_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false positive rate between subgroups and the rest of the population.

For more details, refer to FalsePositiveRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import false_positive_rate
subgroups = X[['race', 'sex']]
false_positive_rate(y_true, y_pred, subgroups)

False Negative Rate Disparity

class FalseNegativeRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false negative rate between subgroups and the rest of the population.

For each subgroup, the disparity is measured by comparing the false negative rate on instances of a subgroup against the rest of the population.

False Negative Rate [1] (also known as FNR or miss rate) is calculated as FN / (FN + TP), where FN and TP are the number of false negatives and true positives, respectively.

Perfect score

A perfect score for this metric means that the model does not incorrectly predict the negative class for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect false negative rate disparity would mean that all combinations of values for race and sex have identical false negative rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Alexandra Chouldechova. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments”. Big Data (2016).

Examples

from automlx.fairness.metrics import FalseNegativeRateScorer
scorer = FalseNegativeRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

false_negative_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false negative rate between subgroups and the rest of the population.

For more details, refer to FalseNegativeRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import false_negative_rate
subgroups = X[['race', 'sex']]
false_negative_rate(y_true, y_pred, subgroups)

False Omission Rate Disparity

class FalseOmissionRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false omission rate between subgroups and the rest of the population.

For each subgroup, the disparity is measured by comparing the false omission rate on instances of a subgroup against the rest of the population.

False Omission Rate (also known as FOR) is calculated as FN / (FN + TN), where FN and TN are the number of false negatives and true negatives, respectively.

Perfect score

A perfect score for this metric means that the model does not make mistakes on the negative class for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect false omission rate disparity would mean that all combinations of values for race and sex have identical false omission rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Examples

from automlx.fairness.metrics import FalseOmissionRateScorer
scorer = FalseOmissionRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

false_omission_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false omission rate between subgroups and the rest of the population.

For more details, refer to FalseOmissionRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import false_omission_rate
subgroups = X[['race', 'sex']]
false_omission_rate(y_true, y_pred, subgroups)

False Discovery Rate Disparity

class FalseDiscoveryRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false discovery rate between subgroups and the rest of the population.

For each subgroup, the disparity is measured by comparing the false discovery rate on instances of a subgroup against the rest of the population.

False Discovery Rate (also known as FDR) is calculated as FP / (FP + TP), where FP and TP are the number of false positives and true positives, respectively.

Perfect score

A perfect score for this metric means that the model does not make mistakes on the positive class for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect false discovery rate disparity would mean that all combinations of values for race and sex have identical false discovery rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Examples

from automlx.fairness.metrics import FalseDiscoveryRateScorer
scorer = FalseDiscoveryRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

false_discovery_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s false discovery rate between subgroups and the rest of the population.

For more details, refer to FalseDiscoveryRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import false_discovery_rate
subgroups = X[['race', 'sex']]
false_discovery_rate(y_true, y_pred, subgroups)

Error Rate Disparity

class ErrorRateScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s error rate between subgroups and the rest of the population.

For each subgroup, the disparity is measured by comparing the error rate on instances of a subgroup against the rest of the population.

Error Rate (also known as inaccuracy) is calculated as (FP + FN) / N, where FP and FN are the number of false positives and false negatives, respectively, while N is the total Number of instances.

Perfect score

A perfect score for this metric means that the model does not make mistakes for any of the subgroups more often than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect error rate disparity would mean that all combinations of values for race and sex have identical error rates. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Examples

from automlx.fairness.metrics import ErrorRateScorer
scorer = ErrorRateScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

error_rate(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s error rate between subgroups and the rest of the population.

For more details, refer to ErrorRateScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import error_rate
subgroups = X[['race', 'sex']]
error_rate(y_true, y_pred, subgroups)

Equalized Odds

class EqualizedOddsScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s true positive and false positive rates between subgroups and the rest of the population.

The disparity is measured by comparing the true positive and false positive rates on instances of a subgroup against the rest of the population.

True Positive Rate (also known as TPR, recall, or sensitivity) is calculated as TP / (TP + FN), where TP and FN are the number of true positives and false negatives, respectively.

False Positive Rate (also known as FPR or fall-out) is calculated as FP / (FP + TN), where FP and TN are the number of false positives and true negatives, respectively.

Equalized Odds [1] is computed by taking the maximum distance between TPR and FPR for a subgroup against the rest of the population.

Perfect score

A perfect score for this metric means that the model has the same TPR and FPR when comparing a subgroup to the rest of the population. For example, if the protected attributes are race and sex, then a perfect Equalized Odds disparity would mean that all combinations of values for race and sex have identical TPR and FPR. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Moritz Hardt et al. “Equality of Opportunity in Supervised Learning”. Advances in Neural Information Processing Systems. 2016.

Examples

from automlx.fairness.metrics import EqualizedOddsScorer
scorer = EqualizedOddsScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

equalized_odds(y_true, y_pred, subgroups, distance_measure='diff', reduction='mean')

Measures the disparity of a model’s true positive and false positive rates between subgroups and the rest of the population.

For more details, refer to EqualizedOddsScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Examples

from automlx.fairness.metrics import equalized_odds
subgroups = X[['race', 'sex']]
equalized_odds(y_true, y_pred, subgroups)
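
Following the definition above, the per-subgroup value is the maximum of the TPR and FPR distances. A sketch using the functional metrics with reduction=None (which return per-subgroup dicts) should therefore agree with equalized_odds, assuming the documented behavior:

from automlx.fairness.metrics import true_positive_rate, false_positive_rate

tpr_dist = true_positive_rate(y_true, y_pred, subgroups, reduction=None)
fpr_dist = false_positive_rate(y_true, y_pred, subgroups, reduction=None)

# Maximum of the two distances for each subgroup
eo_per_subgroup = {g: max(tpr_dist[g], fpr_dist[g]) for g in tpr_dist}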

Theil Index

class TheilIndexScorer(protected_attributes, distance_measure=None, reduction='mean')

Measures the disparity of a model’s predictions according to groundtruth labels, as proposed by Speicher et al. [1].

Intuitively, the Theil Index can be thought of as a measure of the divergence between a subgroup’s error distribution (i.e. its false positives and false negatives) and that of the rest of the population.

Perfect score

The perfect score for this metric is 0, meaning that the model does not have a different error distribution for any subgroup when compared to the rest of the population. For example, if the protected attributes are race and sex, then a perfect Theil Index disparity would mean that all combinations of values for race and sex have identical error distributions.
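
For reference, Speicher et al. [1] define the index over per-instance “benefits”. The following sketch is an assumption based on the cited paper, not necessarily the exact implementation used here:

import numpy as np

def theil_index_sketch(y_true, y_pred):
    # Benefits b_i = y_pred_i - y_true_i + 1 (Speicher et al., 2018): 0 for
    # false negatives, 1 for correct predictions, 2 for false positives.
    # The index measures how unevenly the benefits are distributed.
    b = np.asarray(y_pred) - np.asarray(y_true) + 1.0
    mu = b.mean()
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = np.where(b > 0, (b / mu) * np.log(b / mu), 0.0)
    return terms.mean()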

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str or None , default=None ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Speicher, Till, et al. “A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices”. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018. https://arxiv.org/abs/1807.00787

Examples

from automlx.fairness.metrics import TheilIndexScorer
scorer = TheilIndexScorer(['race', 'sex'])
scorer(model, X, y_true)
__call__(model, X, y_true, supplementary_features=None)

Compute the metric using a model’s predictions on a given array of instances X .

Parameters :
  • model ( Any ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model).

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

theil_index(y_true, y_pred, subgroups, distance_measure=None, reduction='mean')

Measures the disparity of a model’s predictions according to groundtruth labels, as proposed by Speicher et al. [1].

For more details, refer to TheilIndexScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • y_pred ( pandas.Series , numpy.ndarray , list ) – Array of model predictions.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str or None , default=None ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Returns :

The computed metric value, with format according to reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a distance_measure value is given to the Theil Index.

References

[1] Speicher, Till, et al. “A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices”. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018. https://arxiv.org/abs/1807.00787

Examples

from automlx.fairness.metrics import theil_index
subgroups = X[['race', 'sex']]
theil_index(y_true, y_pred, subgroups)

Evaluating a Dataset

Statistical Parity

class DatasetStatisticalParityScorer(protected_attributes, distance_measure='diff', reduction='mean')

Measures the statistical parity [1] of a dataset. Statistical parity (also known as Base Rate or Disparate Impact) states that a dataset is unbiased if the label is independent of the protected attribute.

For each subgroup, statistical parity is computed as the ratio of positive labels in a subgroup.

Statistical Parity (also known as Base Rate or Disparate Impact) is calculated as PL / N, where PL and N are the number of Positive Labels and total number of instances, respectively.

Perfect score

A perfect score for this metric means that the dataset does not have a different ratio of positive labels for a subgroup than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect statistical parity would mean that all combinations of values for race and sex have identical ratios of positive labels. Perfect values are:

  • 1 if using 'ratio' as distance_measure .

  • 0 if using 'diff' as distance_measure .

Parameters :
  • protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str or None , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

References

[1] Cynthia Dwork et al. “Fairness Through Awareness”. Innovations in Theoretical Computer Science. 2012.

Examples

from automlx.fairness.metrics import DatasetStatisticalParityScorer
scorer = DatasetStatisticalParityScorer(['race', 'sex'])
scorer(X=X, y_true=y_true)
scorer(None, X, y_true)
__call__(model=None, X=None, y_true=None, supplementary_features=None)

Compute the metric on a given array of instances X .

Parameters :
  • model ( object or None , default=None ) – Object that implements a predict(X) function to collect categorical predictions.

  • X ( pandas.DataFrame or None , default=None ) – Array of instances to compute the metric on.

  • y_true ( pandas.Series , numpy.ndarray , list or None , default=None ) – Array of groundtruth labels.

  • supplementary_features ( pandas.DataFrame , or None , default=None ) – Array of supplementary features for each instance. Used in case one attribute in self.protected_attributes is not contained by X (e.g. if the protected attribute is not used by the model). Raise an AutoMLxValueError if a feature is present in both X and supplementary_features .

Returns :

The computed metric value, with format according to self.reduction .

Return type :

float , dict

Raises :

AutoMLxValueError – If a feature is present in both X and supplementary_features .

dataset_statistical_parity(y_true, subgroups, distance_measure='diff', reduction='mean')

Measures the statistical parity of a dataset.

For more details, refer to DatasetStatisticalParityScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

  • distance_measure ( str , default='diff' ) –

    Determines the distance used to compare a subgroup’s metric against the rest of the population. Possible values are:

    • 'ratio' : Uses (subgroup_val / rest_of_pop_val), inverted to always be >= 1 if needed.

    • 'diff' : Uses |subgroup_val - rest_of_pop_val|.

  • reduction ( str , default='mean' ) –

    Determines how to reduce scores on all subgroups to a single output. Possible values are:

    • 'max' : Returns the maximal value among all subgroup metrics.

    • 'mean' : Returns the mean over all subgroup metrics.

    • None : Returns a {subgroup: subgroup_metric, ...} dict.

Examples

from automlx.fairness.metrics import dataset_statistical_parity
subgroups = X[['race', 'sex']]
dataset_statistical_parity(y_true, subgroups)

Consistency

class ConsistencyScorer(protected_attributes)

Measures the consistency of a dataset.

Consistency is measured as the ratio of instances whose label differs from the labels of their k=5 nearest neighbors.

Perfect score

A perfect score for this metric is 0, meaning that the dataset does not have different labels for instances that are similar to one another.

Parameters :

protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

Examples

from automlx.fairness.metrics import ConsistencyScorer
scorer = ConsistencyScorer(['race', 'sex'])
scorer(X=X, y_true=y_true)
scorer(None, X, y_true)
__call__(model=None, X=None, y_true=None, supplementary_features=None)

Call self as a function.

consistency(y_true, subgroups)

Measures the consistency of a dataset.

For more details, refer to ConsistencyScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

Examples

from automlx.fairness.metrics import consistency
subgroups = X[['race', 'sex']]
consistency(y_true, subgroups)
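
The underlying idea can be sketched with scikit-learn’s nearest neighbors, assuming the documented k=5 neighbors (the exact implementation, e.g. how features are encoded, may differ):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_sketch(X_numeric, y, k=5):
    # Average deviation between each instance's label and the mean label
    # of its k nearest neighbors; 0 means neighbors always agree.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_numeric)
    _, idx = nn.kneighbors(X_numeric)
    neighbor_labels = np.asarray(y)[idx[:, 1:]]  # drop the point itself
    return np.abs(np.asarray(y) - neighbor_labels.mean(axis=1)).mean()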

Smoothed EDF

class SmoothedEDFScorer(protected_attributes)

Measures the smoothed Empirical Differential Fairness (EDF) of a dataset, as proposed by Foulds et al. [1].

Smoothed EDF returns the minimal exponential deviation of positive target ratios comparing a subgroup to the rest of the population.

This metric is related to DatasetStatisticalParity with reduction=’max’ and distance_measure=’ratio’ , with the only difference being that SmoothedEDFScorer returns a logarithmic value instead.

Perfect score

A perfect score for this metric is 0, meaning that the dataset does not have a different ratio of positive labels for a subgroup than it does for the rest of the population. For example, if the protected attributes are race and sex, then a perfect smoothed EDF would mean that all combinations of values for race and sex have identical ratios of positive labels.

Parameters :

protected_attributes ( pandas.Series , numpy.ndarray , list , str ) – Array of attributes or single attribute that should be treated as protected. If an attribute is protected, then all of its unique values are considered as subgroups.

References

[1] Foulds, James R., et al. “An intersectional definition of fairness”. 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020. https://arxiv.org/abs/1807.08362

Examples

from automlx.fairness.metrics import SmoothedEDFScorer
scorer = SmoothedEDFScorer(['race', 'sex'])
scorer(X=X, y_true=y_true)
scorer(None, X, y_true)
__call__(model=None, X=None, y_true=None, supplementary_features=None)

Call self as a function.

smoothed_edf(y_true, subgroups)

Measures the smoothed Empirical Differential Fairness (EDF) of a dataset, as proposed by Foulds et al. [1].

For more details, refer to SmoothedEDFScorer .

Parameters :
  • y_true ( pandas.Series , numpy.ndarray , list ) – Array of groundtruth labels.

  • subgroups ( pandas.DataFrame ) – Dataframe containing protected attributes for each instance.

References

[1] Foulds, James R., et al. “An intersectional definition of fairness”. 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020. https://arxiv.org/abs/1807.08362

Examples

from automlx.fairness.metrics import smoothed_edf
subgroups = X[['race', 'sex']]
smoothed_edf(y_true, subgroups)
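
Given the stated relation to statistical parity, the two quantities should be comparable up to the log scaling and smoothing; a sketch (exact agreement is not guaranteed, especially on small samples):

import numpy as np
from automlx.fairness.metrics import dataset_statistical_parity, smoothed_edf

max_ratio = dataset_statistical_parity(y_true, subgroups,
                                       distance_measure='ratio',
                                       reduction='max')
print(smoothed_edf(y_true, subgroups))  # worst-case disparity, log scale
print(np.log(max_ratio))                # should be comparable for large samples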

Bias Mitigation

AutoMLx fairness bias mitigation module

class ModelBiasMitigator(base_estimator, protected_attribute_names, fairness_metric, accuracy_metric, higher_fairness_is_better='auto', higher_accuracy_is_better='auto', fairness_metric_uses_probas='auto', accuracy_metric_uses_probas='auto', constraint_target='accuracy', constraint_type='relative', constraint_value=0.05, base_estimator_uses_protected_attributes=True, n_trials_per_group=100, time_limit=None, subsampling=50000, regularization_factor=0.001, favorable_label_idx=1, random_seed=0)

Class to mitigate the bias of an already fitted machine learning model.

The mitigation procedure works by multiplying the predicted probability of the majority class by a different scalar for every population subgroup and then rescaling the prediction probabilities, producing tweaked label probabilities.

The different multiplying scalars are searched in order to find the best possible trade-offs between any fairness and accuracy metrics passed as input.

This object produces a set of optimal fairness-accuracy trade-offs, which can be visualized using the show_tradeoff method.

A default best multiplier is selected according to parametrizable input constraints. It is possible to select any other multiplier on the trade-off using the select_model method and inputting the index of the preferred multiplier, as shown when hovering over multipliers in show_tradeoff .
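
The core mitigation step can be sketched as follows (a hypothetical helper illustrating the idea, not the actual implementation):

import numpy as np

def apply_group_multipliers(probas, groups, multipliers, majority_idx=1):
    # Scale the majority-class probability by a per-group scalar, then
    # renormalize each row so the class probabilities sum to 1 again.
    probas = np.array(probas, dtype=float)
    for group, multiplier in multipliers.items():
        mask = groups == group
        probas[mask, majority_idx] *= multiplier
    return probas / probas.sum(axis=1, keepdims=True)

# e.g. multipliers = {'male': 0.9, 'female': 1.2}, as found by the search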

Parameters :
  • base_estimator ( model object ) – The base estimator on which we want to mitigate bias.

  • protected_attribute_names ( str , List [ str ] ) – The protected attribute names to use to compute fairness metrics. These should always be a part of any input dataset passed.

  • fairness_metric ( str , callable ) –

    The fairness metric to mitigate bias for.

    • If str, it is the name of the scoring metric. Available metrics are: ['statistical_parity', 'TPR', 'FPR', 'FNR', 'FOR', 'FDR', 'error_rate', 'equalized_odds', 'theil_index']

    • If callable, it has to have the fairness_metric(y_true, y_pred, subgroups) signature.

  • accuracy_metric ( str , callable ) –

    The accuracy metric to optimize for while mitigating bias.

    • If str, it is the name of the scoring metric. Available metrics are: ['neg_log_loss', 'roc_auc', 'accuracy', 'f1', 'precision', 'recall', 'f1_micro', 'f1_macro', 'f1_weighted', 'f1_samples', 'recall_micro', 'recall_macro', 'recall_weighted', 'recall_samples', 'precision_micro', 'precision_macro', 'precision_weighted', 'precision_samples']

    • If callable, it has to have the accuracy_metric(y_true, y_pred) signature.

  • higher_fairness_is_better ( bool , 'auto' , default='auto' ) – Whether a higher fairness score with respect to fairness_metric is better. Needs to be set to “auto” if fairness_metric is a str, in which case it is set automatically.

  • higher_accuracy_is_better ( bool , 'auto' , default='auto' ) – Whether a higher accuracy score with respect to accuracy_metric is better. Needs to be set to “auto” if accuracy_metric is a str, in which case it is set automatically.

  • fairness_metric_uses_probas ( bool , 'auto' , default='auto' ) – Whether or not the fairness metric should be given label probabilities or actual labels as input. Needs to be set to “auto” if fairness_metric is a str, in which case it is set automatically.

  • accuracy_metric_uses_probas ( bool , 'auto' , default='auto' ) – Whether or not the accuracy metric should be given label probabilities or actual labels as input. Needs to be set to “auto” if accuracy_metric is a str, in which case it is set automatically.

  • constraint_target ( str , default='accuracy' ) – On which metric should the constraint be applied for default model selection. Possible values are 'fairness' and 'accuracy' .

  • constraint_type ( str , default='relative' ) –

    Which type of constraint should be used to select the default model. Possible values are:

    • 'relative' : Apply a constraint relative to the best found models. A relative constraint on accuracy with the F1 metric would look at the best F1 model found and tolerate at most a constraint_value relative deviation from it, returning the model with the best fairness within that constraint.

    • 'absolute' : Apply an absolute constraint to the best found models. An absolute constraint on fairness with the Equalized Odds metric would only consider models with Equalized Odds below constraint_value , returning the model with the best accuracy within that constraint.

  • constraint_value ( float , default=0.05 ) – What value to apply the constraint with when selecting the default model. Look at constraint_type ’s documentation for more details.

  • base_estimator_uses_protected_attributes ( bool , default=True ) – Whether or not base_estimator uses the protected attributes for inference. If set to False , protected attributes will be removed from any input dataset before collecting predictions from base_estimator .

  • n_trials_per_group ( int , default=100 ) – Number of different multiplying scalars to consider. Scales linearly with the number of groups in the data, i.e. n_trials = n_trials_per_group * n_groups . When both n_trials_per_group and time_limit are specified, whichever limit is reached first stops the search procedure.

  • time_limit ( float or None , default=None ) – Maximum number of seconds to spend searching. A None value means no time limit is set. When both n_trials_per_group and time_limit are specified, whichever limit is reached first stops the search procedure.

  • subsampling ( int , default=50000 ) – The number of rows to subsample the dataset to when tuning. This parameter drastically improves running time on large datasets with little decrease in overall performance. Can be deactivated by passing numpy.inf .

  • regularization_factor ( float , default=0.001 ) – The amount of regularization to be applied when selecting multipliers.

  • favorable_label_idx ( int , default=1 ) – Index of the favorable label to use when computing metrics.

  • random_seed ( int , default=0 ) – Random seed to ensure reproducible outcome.

tradeoff_summary_

DataFrame containing the optimal fairness-accuracy trade-off models with only the most relevant information.

Type :

pd.DataFrame

selected_multipliers_idx_

Index of the currently selected model for self._best_trials_detailed .

Type :

int

selected_multipliers_

DataFrame containing the multipliers for each sensitive group that are currently used for inference.

Type :

pd.DataFrame

constrained_metric_

Name of the metric on which the constraint is applied.

Type :

str

unconstrained_metric_

Name of the metric on which no constraint is applied.

Type :

str

constraint_criterion_value_

Value of the constraint being currently applied.

Type :

float

Raises :

AutoMLxTypeError , AutoMLxValueError – Raised when an input argument is invalid.

Examples

from automlx.fairness.bias_mitigation import ModelBiasMitigator

bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names='sex',
    fairness_metric='equalized_odds',
    accuracy_metric='balanced_accuracy')

# Scikit-learn like API supported
bias_mitigated_model.fit(X_val, y_val)
y_pred_proba = bias_mitigated_model.predict_proba(X_test)
y_pred_labels = bias_mitigated_model.predict(X_test)

# Use show_tradeoff() to display all available models
bias_mitigated_model.show_tradeoff()

# Can select a specific model manually
bias_mitigated_model.select_model(1)

# Predictions are now made with new model
y_pred_proba = bias_mitigated_model.predict_proba(X_test)
y_pred_labels = bias_mitigated_model.predict(X_test)
fit(X, y)

Apply bias mitigation to the base estimator given a dataset and labels.

Note that it is highly recommended you use a validation set for this method, so as to have a more representative range of probabilities for the model instead of the potentially skewed probabilities on training samples.

Parameters :
  • X ( pd.DataFrame ) – The dataset on which to mitigate the estimator’s bias.

  • y ( pd.DataFrame , pd.Series , np.ndarray ) – The labels for which to mitigate the estimator’s bias.

Returns :

self – The fitted ModelBiasMitigator object.

Return type :

ModelBiasMitigator

Raises :

AutoMLxValueError – Raised when an invalid value is encountered.

predict(X)

Predict class for input dataset X.

Parameters :

X ( pd.DataFrame ) – The dataset for which to collect labels.

Returns :

labels – The labels for every sample.

Return type :

np.ndarray

predict_proba(X)

Predict class probabilities for input dataset X.

Parameters :

X ( pd.DataFrame ) – The dataset for which to collect label probabilities.

Returns :

probabilities – The label probabilities for every sample.

Return type :

np.ndarray

select_model(model_idx)

Select the multipliers to use for inference.

Parameters :

model_idx ( int ) – The index of the multipliers in self.best_trials_ to use for inference, as displayed by show_tradeoff .

Raises :

AutoMLxValueError – Raised when the passed model_idx is invalid.

show_tradeoff(hide_inadmissible=False)

Show the models representing the best fairness-accuracy trade-off found.

Parameters :

hide_inadmissible ( bool , default=False ) – Whether or not to hide the models that don’t satisfy the constraint.