14 - Beyond the Patterns - Isabel Valera - Ethical Machine Learning: Mind the Assumptions!

Recording date 2021-02-12

Via: Free

Language: English

Organisational Unit: Lehrstuhl für Informatik 5 (Mustererkennung)

Producer: Friedrich-Alexander-Universität Erlangen-Nürnberg

Format: lecture

It is a great pleasure to announce Isabel Valera from Max Planck Institute (MPI) Saarbrücken as invited speaker at our lab!

Title: “Ethical ML: mind the assumptions”
Speaker: Prof. Dr. Isabel Valera (Saarland University & Max Planck Institute for Software Systems)
Website: https://ivaleram.github.io/

Abstract: As automated data analysis supplements and even replaces human supervision in consequential decision-making (e.g., pretrial bail and loan approval), there are growing concerns from civil organizations, governments, and researchers about the potential unfairness and lack of transparency of these algorithmic systems. To address these concerns, the emerging field of ethical machine learning has focused on proposing definitions and mechanisms to ensure the fairness and explicability of the outcomes of these systems. However, as we will discuss in this work, existing solutions are still far from perfect and encounter significant technical challenges. Specifically, I will show that, in order to achieve ethical ML, it is essential to have a holistic view of the system, starting from the data collection process before training all the way to the deployment of the system in the real world. Wrong technical assumptions may indeed come at a high social cost. As an example, I will first focus on my recent work on both fair algorithmic decision-making and algorithmic recourse. In particular, I will show that algorithms may indeed amplify the existing level of unfairness in the data if their assumptions do not hold in practice. Then, I will focus on algorithmic recourse, which aims to guide individuals affected by an algorithmic decision system on how to achieve the desired outcome. In this context, I will discuss the inherent limitations of counterfactual explanations and argue for a shift of paradigm from recourse via nearest counterfactual explanations to recourse through interventions, which directly accounts for the underlying causal structure of the data. Finally, we will discuss how to achieve recourse in practice when only limited causal information is available.
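The distinction the abstract draws between nearest counterfactual explanations and recourse through interventions can be sketched with a toy example. Everything here is an illustrative assumption, not from the talk: a hypothetical loan classifier, an invented approval threshold, and an assumed structural equation linking income and savings.

```python
# Hypothetical sketch (not from the talk): nearest counterfactual
# explanation vs. recourse through a causal intervention.
# Assumed toy structural causal model: savings = 0.5 * income.

def classifier(income, savings):
    # Toy loan-approval rule; the threshold of 100 is an assumption.
    return income + savings >= 100

# An individual currently denied a loan:
income, savings = 60, 30            # 60 + 30 = 90 < 100 -> denied
assert not classifier(income, savings)

# (1) A nearest counterfactual explanation treats features as
# independent: "raise savings by 10" flips the decision on paper,
# but says nothing about how to achieve that change in the world.
assert classifier(income, savings + 10)

# (2) Recourse through interventions instead acts on a cause and lets
# the change propagate through the (assumed) structural equation:
def intervene_on_income(new_income):
    new_savings = 0.5 * new_income  # downstream effect of the intervention
    return new_income, new_savings

new_income, new_savings = intervene_on_income(70)
# 70 + 0.5 * 70 = 105 >= 100 -> approved via a feasible intervention
assert classifier(new_income, new_savings)
```

The point of the sketch is that the counterfactual in (1) ignores the causal dependence between features, while (2) recommends an action whose downstream effects are modeled; with only partial causal knowledge, the structural equation itself must be estimated, which is the practical setting the abstract ends on.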

Short Bio: Prof. Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany).

She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where she is part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.

Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt post-doctoral fellowship and a "Minerva Fast Track" fellowship from the Max Planck Society. Prof. Valera obtained her PhD in 2014 and her MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders to watch new videos, follow us on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn groups.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)
