42 - Beyond the Patterns - Nils Köbis: Ethical Behavior by Humans and Machines [ID:36153]

Welcome back, everybody, to another episode of Beyond the Patterns.

Today, I have the great pleasure to introduce Dr. Nils Köbis.

He is a Post-Doctoral researcher at the Max Planck Institute for Human Development, Center

for Humans and Machines.

He is a behavioral scientist working on corruption, (un-)ethical behavior, social norms, and, more recently, artificial intelligence.

So he co-founded the interdisciplinary Corruption Research Network.

And together with Matthew Stephenson and Christopher Starke he hosts KickBack – the Global

Anti-Corruption Podcast.

Previously he completed a postdoc at CREED, the Department of Economics of the University of

Amsterdam, and a PhD in social psychology at the VU Free University Amsterdam.

Today I have him here and he will give a presentation on machine behavior and how it

can be applied with respect to AI ethics, and I'm very glad that he's here. It's a very important

topic and you will see this will be really an awesome presentation going into a lot of examples

of how machines can actually behave, and how they affect the behavior of humans in response.

So Nils, great to have you here. I'm very much looking forward to your talk,

and the stage is yours!

Thank you Andreas for the invitation and also the kind words. Very glad to be here, even though it's

only virtually. I'm very happy to share my recent research with you. I'll try my best to go

through it at a slow pace, because I am aware that this is a rather diverse audience, and I hope

I don't get too much into the specifics of one particular discipline. Because as Andreas mentioned,

I do believe quite strongly in interdisciplinary research, and I see this research as very much combining

different approaches. And as mentioned, the title is Ethical Behavior by Humans and Machines.

The main part of the talk today is actually focusing on the question of how intelligent

machines influence our moral decisions. There has been a lot of research illustrating that

intelligent algorithms, AI systems can produce unethical outcomes.

Just, you know, one famous example is gender and racial biases in various AI systems,

for example, facial recognition or jail and bail algorithms already used in the US.

But what I think has been less discussed in research is sort of how these AI systems,

when they are employed, can actually influence our ethical behavior.

It's sort of a second-order effect.

And the main part of the talk today will be based on a recent review paper

that I published together with Jean-François Bonnefon, who is in Toulouse,

and Iyad Rahwan, who is the director of the Center for Humans and Machines.

In order to set the stage, and like I said, to reflect that this is a rather diverse

audience, I want to take a step back and sort of explain what the field of behavioral ethics

that I am doing most of my research in conceptualizes as unethical behavior. And it starts with

a very basic premise, I would say, namely, that throughout our daily lives, we often

face so-called ethical dilemmas. And these are situations where breaking an ethical rule

often leads to profits, while adhering to these rules is sort of the behavior expected of

you. And you can actually also at times replace ethical rules with legal rules. So one example

is whether to cheat on your tax claims, whether to buy a ticket for public

transport, or maybe even whether to tell an inconvenient truth. These are oftentimes situations where you

have to decide whether to break an ethical rule for private profit, oftentimes financial,

or to adhere to these ethical rules. I think one of the strengths of the field of behavioral

ethics is that it has developed several standardized paradigms that have helped to gain insights

into how people deal with these ethical dilemmas. And it has been quite popular

across various disciplines. So behavioral economics, social psychology, management,

but also philosophy actually has used such tasks. I'm going to show you one of them,

which is also probably the most frequently used one. It's the so-called die-rolling task.
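The transcript excerpt cuts off as the die-rolling paradigm is introduced. Assuming the standard setup from the behavioral-ethics literature (which this excerpt does not spell out): participants privately roll a fair die and report the outcome, higher reports pay more, and because rolls are private, dishonesty can only be inferred statistically at the group level by comparing the reported distribution to a uniform one. A minimal sketch of that logic, with all names and the cheating model chosen for illustration:

```python
import random

def simulate_reports(n_participants, p_cheat, seed=0):
    """Simulate a die-rolling task: each participant privately rolls a fair
    six-sided die; honest participants report the actual roll, while a
    fraction p_cheat reports the maximum-payoff outcome (6) instead."""
    rng = random.Random(seed)
    reports = []
    for _ in range(n_participants):
        roll = rng.randint(1, 6)
        if rng.random() < p_cheat:
            reports.append(6)  # over-report for maximal private profit
        else:
            reports.append(roll)
    return reports

def share_of_sixes(reports):
    """Aggregate statistic: under full honesty this should be near 1/6."""
    return reports.count(6) / len(reports)
```

With `p_cheat=0` the share of reported sixes stays near 1/6; an inflated share of high reports reveals aggregate dishonesty without identifying any individual cheater, which is what makes the paradigm attractive for studying ethical behavior across disciplines.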

Part of a video series:

Accessible via

Open access

Duration

01:32:51 min

Recording date

2021-09-22

Uploaded on

2021-09-22 15:46:05

Language

en-US

We have the great honor to welcome Dr. Nils Köbis to our lab for an invited presentation!

Abstract: Machines powered by Artificial Intelligence (AI) are now influencing the behavior of humans in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people’s ethical behavior, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we present new experimental evidence on how AI influences human ethical behavior. We propose a framework that highlights four main social roles through which both humans and machines can influence ethical behavior: (a) role model, (b) advisor, (c) partner, and (d) delegate. Based on these insights, we outline a research agenda that aims at providing more behavioral insights for better AI oversight.

Short Bio: Nils Köbis is a Post-Doctoral researcher at the Max Planck Institute for Human Development, Center for Humans and Machines. He is a behavioral scientist working on corruption, (un-)ethical behavior, social norms, and, more recently, artificial intelligence. Also, he co-founded the Interdisciplinary Corruption Research Network and together with Matthew Stephenson and Christopher Starke hosts the KickBack – Global Anti-Corruption Podcast. Previously he completed a Post-Doc at CREED, Department of Economics, University of Amsterdam, and a Ph.D. in Social Psychology at the VU Free University Amsterdam.

Register for more upcoming talks here!

References

Köbis, Nils, Jean-François Bonnefon, and Iyad Rahwan. "Bad machines corrupt good morals." Nature Human Behaviour 5.6 (2021): 679-685.

Köbis, Nils, and Luca D. Mossink. "Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry." Computers in human behavior 114 (2021): 106553.

Köbis, Nils, Barbora Doležalová, and Ivan Soraperra. "Fooled Twice: People Cannot Detect Deepfakes but Think They Can." Available at SSRN 3832978 (2021).

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders to watch new videos, follow us on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn groups.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)

Tags

beyond the patterns