Master's thesis in Data & AI: Automated Practicality Evaluation of Counterfactual Explanations Using GPT
Sector | See below |
Employment type | See below |
Hours | See below |
Location | Veenendaal |
Salary indication | 0-5,000 |
Education level | See below |
Organisation | Info Support |
Contact person | Info Support Nederland 0318552020 |
Information
- XAI/Explainable AI
- LLMs/Large Language Models
- ChatGPT
- NLP/Natural Language Processing
- A challenging assignment within a practical environment
- € 1,000 compensation, € 500 + lease car, or € 600 + living space
- Professional guidance
- Courses aimed at your graduation period
- Support from our academic Research Center
- Two vacation days per month
- 65% Research
- 10% Analysis, design, realization
- 25% Documentation
Description
Counterfactual explanations are considered an intuitive and user-friendly form of explainable AI. A counterfactual explanation proposes minimal changes to the input data that would lead to a different model output. For example, a bank could use a model to determine whether a customer qualifies for a loan. If a customer is denied a loan, a counterfactual explanation could tell them that, for instance, “if their savings were increased by € 10,000, they could get the loan”.
However, counterfactual explanations are not aware of the semantics of the input features. The minimal change for a customer to get a loan might be to “increase their age from 25 to 35 years old”, which, from a human perspective, is evidently infeasible in a short time span.
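To make the setting concrete, below is a minimal, self-contained sketch of a counterfactual search on a toy loan model. The feature names, the toy model, and the greedy single-feature search are illustrative assumptions, not the counterfactual techniques that would be studied in this thesis.

```python
# A toy loan model and a naive counterfactual search (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [age in years, savings in thousands of EUR];
# label 1 means the loan is approved.
X = np.column_stack([
    rng.integers(20, 70, 500),
    rng.integers(0, 50, 500),
])
y = ((X[:, 0] > 30) & (X[:, 1] > 15)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, feature, step, max_steps=100):
    """Greedily increase one feature until the model's decision flips, if it ever does."""
    cf = x.astype(float).copy()
    original_label = model.predict([x])[0]
    for _ in range(max_steps):
        cf[feature] += step
        if model.predict([cf])[0] != original_label:
            return cf
    return None

applicant = np.array([25.0, 8.0])  # denied: too young and too little savings
for name, idx, step in [("age", 0, 1.0), ("savings", 1, 1.0)]:
    cf = counterfactual(applicant, idx, step)
    if cf is not None:
        print(f"Counterfactual via {name}: {applicant} -> {cf}")
```

An age-based counterfactual returned by such a naive search is exactly the kind of semantically infeasible suggestion this project aims to detect automatically.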
Assignment
In this project, we would like you to design a new evaluation metric that automatically determines the practicality of counterfactuals, that is, how attainable a counterfactual explanation is (or whether it is humanly attainable at all). Since determining practicality requires a semantic understanding of the input features, our idea is to leverage GPT and/or other large-language-model-based technologies to accomplish this.
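As a rough illustration of the idea, a practicality score could be obtained by asking a chat model to rate a proposed change on a fixed scale. The sketch below assumes the openai Python package (version 1.x) with an API key in the environment; the prompt wording, rating scale, and model name are all assumptions and would need to be designed and validated as part of the project.

```python
# Sketch of an LLM-based practicality score; assumes the openai package (>= 1.0)
# and an OPENAI_API_KEY in the environment. Prompt, scale, and model name are
# illustrative assumptions, not the metric to be designed in this project.
from openai import OpenAI

client = OpenAI()

def practicality_score(applicant: str, suggested_change: str) -> int:
    prompt = (
        f"A loan application was rejected. Applicant: {applicant}\n"
        f"A counterfactual explanation suggests: {suggested_change}\n"
        "On a scale from 1 (humanly impossible) to 5 (easily attainable within a year), "
        "how practical is this suggested change? Answer with a single digit."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Naive parsing: assumes the model answers with just a digit.
    return int(response.choices[0].message.content.strip())

print(practicality_score("25 years old, EUR 8,000 in savings",
                         "increase age from 25 to 35"))
```

The single-prompt design and naive parsing are deliberate simplifications; a real metric would need careful prompt design, calibration, and robustness checks.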
A user study could initially be conducted following the methodology proposed by Spreitzer et al. (1) to evaluate the practicality of different state-of-the-art counterfactual techniques. The user study can then be extended to evaluate how accurately the proposed metric matches the human evaluations.
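Once human practicality ratings are available, agreement between the metric and the participants could be quantified with, for example, a rank correlation. The ratings below are made up purely to illustrate the computation.

```python
# Comparing automatic practicality scores with human ratings via rank correlation.
# The numbers are made up purely to illustrate the computation.
from scipy.stats import spearmanr

human_ratings = [5, 4, 1, 2, 3]   # e.g. mean participant rating per counterfactual
metric_scores = [5, 5, 1, 2, 4]   # scores produced by the proposed metric

rho, p_value = spearmanr(human_ratings, metric_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```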
(1) Spreitzer, H. Haned, and I. van der Linden, “Evaluating the Practicality of Counterfactual Explanations.”