She is a fellow of the European Laboratory for Learning and Intelligent Systems, ELLIS, where she is part of the Robust Machine Learning
Program and the Saarbrücken Artificial Intelligence and Machine Learning Unit.
Prior to this, she was an independent group leader at the MPI for Intelligent Systems
in Tübingen, Germany.
She has held a German Humboldt Postdoctoral Fellowship and a Minerva Fast Track Fellowship
from the Max Planck Society.
Professor Valera obtained her PhD in 2014 and Master of Science degree in 2012 from
the University Carlos III in Madrid, Spain, and worked as a postdoctoral researcher at
the MPI for Software Systems Germany and at the University of Cambridge in the United
Kingdom.
Today, it is a great pleasure to have her here, and she will present on "Ethical ML: Mind the Assumptions."
Isabel, I'm very glad to have you here and I'm very much looking forward to your presentation.
Thank you very much for inviting me to be here today.
Yes, and for the kind introduction and for going through all of my bio, where I moved a lot.
I hope I have settled a bit for now.
And as announced, today we are going to talk about machine learning and some of its ethical aspects.
In particular, we will be talking about fairness and interpretability.
Rather than putting a lot of focus on methodological approaches,
I will mostly focus on assumptions, and on how technical assumptions
may actually trigger problems if we do not make them carefully.
So, just as a bit of motivation, the context that I will mostly be discussing
is machine learning used to inform what I call consequential decisions.
And I call them consequential because I'm talking about settings in which the algorithm
is supplementing or even replacing human supervision in decision-making processes
that may have important consequences for the lives of the individuals they decide about.
So, for example, here we have settings such as pre-trial bail.
Some of the tools that are being used in the US are actually informed by machine learning.
So many of you might be familiar by now with the COMPAS dataset.
We are also considering settings like credit approval or hiring processes,
where machine learning is becoming more and more ubiquitous.
And the big question that we are going to be handling is actually:
how do we make sure that these algorithms,
and the decisions that they make, are aligned with human goals and values?
And of course, when I talk about human goals and values,
we are all different individuals in society,
and we play a different role every time we are embedded in these kinds of decisions.
And if you think of the owner of the algorithm,
maybe this individual is interested in making the algorithm precise,
so that it has good performance and therefore maximizes the profit of the company.
Presenters
Accessible via
Open Access
Duration
01:31:45 min
Recording date
2021-02-12
Uploaded on
2021-02-12 17:47:32
Language
en-US
It is a great pleasure to announce Isabel Valera from Max Planck Institute (MPI) Saarbrücken as invited speaker at our lab!
Title: “Ethical ML: mind the assumptions”
Speaker: Prof. Dr. Isabel Valera (Saarland University & Max Planck Institute for Software Systems)
Website: https://ivaleram.github.io/
Abstract: As automated data analysis supplements and even replaces human supervision in consequential decision-making (e.g., pretrial bail and loan approval), there are growing concerns from civil organizations, governments, and researchers about potential unfairness and lack of transparency of these algorithmic systems. To address these concerns, the emerging field of ethical machine learning has focused on proposing definitions and mechanisms to ensure the fairness and explicability of the outcomes of these systems. However, as we will discuss in this work, existing solutions are still far from perfect and encounter significant technical challenges. Specifically, I will show that, in order to achieve ethical ML, it is essential to have a holistic view of the system, starting from the data collection process before training, all the way to the deployment of the system in the real world. Wrong technical assumptions may indeed come at a high social cost. As an example, I will first focus on my recent work on both fair algorithmic decision-making and algorithmic recourse. In particular, I will show that algorithms may indeed amplify the existing unfairness level in the data if their assumptions do not hold in practice. Then, I will focus on algorithmic recourse, which aims to guide individuals affected by an algorithmic decision system on how to achieve the desired outcome. In this context, I will discuss the inherent limitations of counterfactual explanations, and argue for a shift of paradigm from recourse via nearest counterfactual explanations to recourse through interventions, which directly accounts for the underlying causal structure in the data. Finally, we will discuss how to achieve recourse in practice when only limited causal information is available.
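The contrast the abstract draws between nearest counterfactual explanations and recourse through interventions can be sketched with a toy example. Everything below (the linear decision rule, the feature names, the numbers, and the one-edge causal model) is an illustrative assumption, not material from the talk itself:

```python
import numpy as np

# Toy decision rule: approve if w . x + b >= 0, with features x = [education, income].
w, b = np.array([0.5, 1.0]), -4.0

def approved(x):
    return w @ x + b >= 0

x0 = np.array([2.0, 2.0])  # an individual who is currently rejected

# (1) Nearest counterfactual explanation: the smallest feature change
#     (treating features as independent) that flips the decision.
#     For a linear model, the minimal L2 perturbation moves along w.
gap = -(w @ x0 + b)            # distance to the decision boundary in score space
dx_cf = gap * w / (w @ w)      # minimal perturbation onto the boundary
x_cf = x0 + dx_cf
print("counterfactual point:", x_cf, "approved:", approved(x_cf))

# (2) Recourse via interventions: suppose income is causally downstream of
#     education (income increases by 1.5 per unit of education, say).
#     Then acting on education alone also raises income, so a single
#     feasible action can achieve approval, unlike the counterfactual
#     above, which ignores how features depend on each other.
def intervene_education(x, delta):
    edu = x[0] + delta
    income = x[1] + 1.5 * delta  # downstream causal effect of the action
    return np.array([edu, income])

x_int = intervene_education(x0, delta=1.0)  # the individual acts only on education
print("after intervention:", x_int, "approved:", approved(x_int))
```

The point of the sketch is the shift of paradigm mentioned in the abstract: the counterfactual in (1) prescribes a joint change in both features with no regard for whether it is actionable, while the intervention in (2) changes one actionable variable and lets the assumed causal structure propagate its effect.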
Short Bio: Prof. Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany).
She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where she is part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.
Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a "Minerva Fast Track" fellowship from the Max Planck Society. Prof. Valera obtained her PhD in 2014 and MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).
This video is released under CC BY 4.0. Please feel free to share and reuse.
For reminders to watch new videos, follow us on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn Groups.
Music Reference:
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)