---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- counterfactuals
- evaluation
- explainability
pretty_name: CounterEval
size_categories:
- n<1K
---
# CounterEval: Towards Unifying Evaluation of Counterfactual Explanations

## Overview
CounterEval offers a **human-evaluated** dataset of **30 counterfactual scenarios**, designed as a benchmark for evaluating a wide range of counterfactual explanation generation frameworks. Each scenario was deliberately constructed to vary across multiple explanatory quality metrics, such as **Feasibility**, **Consistency**, **Trust**, **Completeness**, **Fairness**, and **Complexity**, while maintaining high **Understandability** and using **Overall Satisfaction** as a general measure of quality. The scenarios are based on well-known datasets such as the Adult dataset and the Pima Indians Diabetes dataset, which mix categorical and continuous features; additional fabricated scenarios were created to probe extreme values on certain metrics. Each scenario was evaluated across all metrics by **206 individuals** recruited through the **Prolific platform**, enabling detailed analysis of how individual explanatory metrics influence the overall satisfaction score.

## Citation
Please cite our work using the following format if you use this dataset:
<div style="font-family: monospace; background-color: #f5f5f5; padding: 10px; border-radius: 4px;">


@misc{domnich2024unifyingevaluationcounterfactualexplanations,<br>
    title={Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments},<br>
    author={Marharyta Domnich and Julius Valja and Rasmus Moorits Veski and Giacomo Magnifico and Kadi Tulver and Eduard Barbu and Raul Vicente},<br>
    year={2024},<br>
    eprint={2410.21131},<br>
    archivePrefix={arXiv},<br>
    primaryClass={cs.AI},<br>
    url={https://arxiv.org/abs/2410.21131}<br>
}
</div>

## Evaluation Metrics
Below is the detailed table of metrics with their definitions and scales:

| Metric | Definition | Scale |
|----------------------|-------------------------------------------------------------------------------------------------|-------------|
| **Overall Satisfaction** | This scenario effectively explains how to reach a different outcome. | from 1 (very unsatisfied) to 6 (very satisfied) |
| **Feasibility** | The actions suggested by the explanation are practical, realistic to implement, and actionable. | from 1 (very infeasible) to 6 (very easy to do) |
| **Consistency** | All parts of the explanation are logically coherent and do not contradict each other. | from 1 (very inconsistent) to 6 (very consistent) |
| **Completeness** | The explanation is sufficient in explaining the outcome. | from 1 (very incomplete) to 6 (very complete) |
| **Trust** | I believe that the suggested changes would bring about the desired outcome. | from 1 (not at all) to 6 (very much) |
| **Understandability** | I feel like I understood the phrasing of the explanation well. | from 1 (incomprehensible) to 6 (very understandable) |
| **Fairness** | The explanation is unbiased towards different user groups and does not operate on sensitive features. | from 1 (very biased) to 6 (completely fair) |
| **Complexity** | The explanation has an appropriate level of detail and complexity - not too simple, yet not overly complex. | from -2 (too simple) to 0 (ideal complexity) to 2 (too complex) |
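Note that Complexity uses a bipolar scale centered at 0 (the ideal), while the other metrics use a unipolar 1-6 scale. When comparing metrics side by side, one plausible preprocessing step (a sketch; this transform is our suggestion, not part of the dataset or the paper) is to fold Complexity into a deviation-from-ideal magnitude and rescale the 1-6 ratings onto a common range:

```python
def complexity_deviation(score: int) -> int:
    """Map a bipolar Complexity rating (-2..2, where 0 is ideal)
    to a deviation-from-ideal magnitude (0..2)."""
    if not -2 <= score <= 2:
        raise ValueError("Complexity ratings range from -2 to 2")
    return abs(score)


def rescale_1_to_6(score: int) -> float:
    """Linearly map a 1-6 rating onto [0, 1] for cross-metric comparison."""
    if not 1 <= score <= 6:
        raise ValueError("Ratings on the other metrics range from 1 to 6")
    return (score - 1) / 5
```

Whether deviation magnitude or the signed score is the right representation depends on the analysis; the signed value preserves the too-simple vs. too-complex distinction.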

## Evaluators
The evaluators were recruited via the Prolific platform, ensuring a demographically diverse pool with varying levels of expertise. Participants were required to be at least 18 years old and fluent in English. Here’s an overview of the evaluator demographics and their distribution:

| Demographic | Description | Percentage (%) |
|------------------------------|------------------------------------------------------|----------------|
| **Age Group** | | |
| | 18-24 years old | 31.07 |
| | 25-34 years old | 46.12 |
| | 35-44 years old | 12.14 |
| | 45-54 years old | 7.28 |
| | 55-64 years old | 3.40 |
| **Nationality** | | |
| | South African | 15.53 |
| | Polish | 14.56 |
| | British | 13.11 |
| | Mexican | 9.71 |
| | Portuguese | 9.22 |
| | Chilean | 5.83 |
| | Italian | 3.88 |
| | Greek | 3.40 |
| | Hungarian | 3.40 |
| | USA | 2.91 |
| | Other | 18.45 |
| **Education Level** | | |
| | Primary school or lower | 0.49 |
| | High school | 33.50 |
| | Bachelor's degree or equivalent | 45.15 |
| | Master's degree or equivalent | 18.93 |
| | Doctoral degree or equivalent | 1.94 |
| **English Proficiency** | | |
| | Native speaker / Fully proficient | 80.58 |
| | Moderately proficient | 19.42 |
| **Experience in Machine Learning** | | |
| | No experience | 48.54 |
| | Some experience | 43.20 |
| | I am studying in a related field | 5.83 |
| | I work in the field / Extensive experience | 2.43 |
| **Experience with Causality Frameworks** | | |
| | No | 70.39 |
| | I am familiar with the general concept | 27.67 |
| | I have previous experience with them | 1.94 |
| **Medical Background** | | |
| | No | 87.86 |
| | I am currently studying medicine or a related field | 4.85 |
| | I work with medical data or in a field related to medicine | 3.88 |
| | I have a degree in medicine or a related field | 1.94 |
| | I work in the field of medicine | 1.46 |

### Exclusion criteria
The responses from participants were excluded if they failed any of the following quality checks:

- Attention Check: Hidden questions designed to flag participants who provided inconsistent or invalid responses.
- Response Time: Unusually fast responses indicating the participant may not have carefully read or evaluated the scenarios.
- Average Understandability Time: Unusually fast average times on the understandability questions, again suggesting the scenarios were not read carefully.
- Indicator Questions: Implausible answers to built-in checks, for example marking infeasible scenarios (e.g., "change place of birth") as feasible.
- Uniform Response Patterns: Participants providing uniform or highly unvaried ratings across all metrics.
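The last check can be approximated by a variance test over each participant's full set of ratings. A minimal sketch, using only the standard library; the threshold here is illustrative, not the one used in the study:

```python
from statistics import pvariance


def is_uniform_responder(ratings: list[int], min_variance: float = 0.1) -> bool:
    """Flag participants whose ratings barely vary across all
    scenario-metric pairs (illustrative threshold, not the study's)."""
    if len(ratings) < 2:
        return True  # too few answers to show any variation
    return pvariance(ratings) < min_variance


# A participant answering 4 everywhere is flagged; varied answers are not.
```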

## File Descriptions
- `cleaned_cf_dataset.csv`: This file contains the cleaned version of evaluator responses after applying the exclusion criteria. Each row corresponds to a Scenario-Metric pair (30 scenarios × 8 metrics = 240 rows). Each column corresponds to a participant's rating of the given Scenario-Metric pair.
- `full_cf_dataset.csv`: This file contains all evaluator responses before applying exclusion criteria.
- `survey_question.csv`: This file provides a list of all unique counterfactual scenarios used in the study, without duplicates.
- `participant_background.csv`: This file provides background information on the participants who evaluated the counterfactual scenarios. It includes demographic details such as age, nationality, education level, and expertise in related fields.
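Given the row-per-scenario-metric layout described above, per-metric means can be computed with the standard library alone. The snippet below is a sketch that runs on a small in-memory stand-in for `cleaned_cf_dataset.csv`; the column names (`scenario`, `metric`, `p1`, ...) are illustrative and may differ from the actual file:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Tiny stand-in for cleaned_cf_dataset.csv; real column names may differ.
SAMPLE = """scenario,metric,p1,p2,p3
1,Feasibility,4,5,3
1,Trust,2,3,2
2,Feasibility,6,5,6
"""


def metric_means(csv_text: str) -> dict[str, float]:
    """Average each metric's ratings over all scenarios and participants."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Participant columns are assumed to be prefixed with "p".
        ratings = [int(v) for k, v in row.items() if k.startswith("p")]
        buckets[row["metric"]].extend(ratings)
    return {m: mean(vals) for m, vals in buckets.items()}


print(metric_means(SAMPLE))
```

For the real file, replace `io.StringIO(csv_text)` with an open file handle and adjust the column names to match.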
