42 - Beyond the Patterns - Nils Köbis: Ethical Behavior by Humans and Machines

Recording date 2021-09-22

Via

Free

Language

English

Organisational Unit

Friedrich-Alexander-Universität Erlangen-Nürnberg

Producer

Friedrich-Alexander-Universität Erlangen-Nürnberg

We have the great honor to welcome Dr. Nils Köbis to our lab for an invited presentation!

Abstract: Machines powered by Artificial Intelligence (AI) are now influencing the behavior of humans in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people’s ethical behavior, worries emerge about the corrupting power of AI agents. To estimate the empirical validity of these fears, we present new experimental evidence on how AI influences human ethical behavior. We propose a framework that highlights four main social roles through which both humans and machines can influence ethical behavior: (a) role model, (b) advisor, (c) partner, and (d) delegate. Based on these insights, we outline a research agenda that aims at providing more behavioral insights for better AI oversight.

Short Bio: Nils Köbis is a post-doctoral researcher at the Center for Humans and Machines, Max Planck Institute for Human Development. He is a behavioral scientist working on corruption, (un-)ethical behavior, social norms, and, more recently, artificial intelligence. He also co-founded the Interdisciplinary Corruption Research Network and, together with Matthew Stephenson and Christopher Starke, hosts the KickBack – Global Anticorruption Podcast. Previously, he completed a post-doc at CREED, Department of Economics, University of Amsterdam, and a Ph.D. in Social Psychology at the Vrije Universiteit Amsterdam.

Register for more upcoming talks here!

References

Köbis, Nils, Jean-François Bonnefon, and Iyad Rahwan. "Bad machines corrupt good morals." Nature Human Behaviour 5.6 (2021): 679-685.

Köbis, Nils, and Luca D. Mossink. "Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry." Computers in Human Behavior 114 (2021): 106553.

Köbis, Nils, Barbora Doležalová, and Ivan Soraperra. "Fooled Twice – People Cannot Detect Deepfakes But Think They Can." Available at SSRN 3832978 (2021).

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders to watch new videos, follow us on Twitter or LinkedIn. Also, join our Facebook and LinkedIn groups for information about talks, videos, and job offers.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)
