10 - Recap Clip 6.1: Introduction: Rationality in Artificial Intelligence [ID:21855]

The next topic was to situate everything we're going to do within a paradigm, namely intelligent, rational agents, which gives us a unifying framework for AI. I tried to motivate this from the point of view that we can think about thinking and acting humanly versus thinking and acting rationally. Rationally is something we can define, and that's the good thing; humanly is something we can only observe.

So essentially what we're trying to do here is shift the official idea of what AI might be, namely acting and thinking humanly, to something we actually have a chance of defining and operationalizing. And we've tried to convince you that these basically come out to the same thing: rational behavior is what humans do quite well, up to computational limitations, or up to our emotional system shifting what we perceive as rational. So there are these four areas. Thinking humanly is what cognitive science does; it is, if you will, the natural science of brains, or of the software of brains.

Thinking rationally is what we do this semester, and acting rationally, which is of course a superset, is what we'll do next semester. We talked about the Turing test, which is really a test of at least talking humanly, and it is the only widely accepted test for AI. It is very difficult to define when we've reached AI, and the oldest test is also the one that is still accepted. Alan Turing, one of the most important early theoretical computer scientists (I think he thought of himself as a mathematician back then), devised it. And in the now roughly 70 years since Turing came up with this test, every objection that people have put forward for why AI might not work in principle had already been anticipated in his Turing test proposal.

Questions? Objections? And of course there is the Loebner Prize: a rich American offered a substantial cash prize to anyone who actually passes the Turing test. There is a contest every year that you can enter with a program, with human judges rating how well the program does. It's a lot of fun, but nobody has ever won it. Still, there are pretty good programs. Things like chatbots, which are easy to build nowadays, are in a way partial answers to the Turing test, and in certain very limited situations people actually can't tell whether they're talking to a human or not. A very early example of that was a program called PARRY, which tries to simulate a paranoid human being. That thing is so good that it has been used for training psychologists. Paranoia in particular comes with a severity: the diagnosis is that you are, say, 0.7 paranoid (I know nothing about this), and you could basically tell PARRY to simulate a 0.7 paranoid person.

And for higher severities, starting at maybe 0.5 or so, PARRY is indistinguishable from a human, which is actually quite nice. So some universities use it to train medical students, so that they get a good feeling for what 0.7 or 0.6 looks like. There is also a very old program you may have heard of: Eliza. Who of you knows Eliza? Okay, not very many. Let me show you Eliza. Eliza is one of the first AI programs, developed by the computer scientist Joseph Weizenbaum, who wanted to show the AI people: look what I can do, I can do AI, so it's trivial and you're wasting your time. By now it is so common that a version is built into the Emacs editor.

"I am the psychotherapist. Please describe your problems." And then you can talk away, and you get an answer. It always turns the question around to you, which is a very simple pattern-based mechanism. And it works as long as I stay serious. Right? If I say... then things become problematic. Those kinds of things. The interesting thing is that Weizenbaum wrote this program to make fun of the AI people. But at the time, in the mid-60s I believe, he at some point told his secretary that he had copies of all the Eliza conversations, and he was surprised that she started crying and left the room. Why? Because she had been discussing very personal problems with Eliza.

She had basically accepted Eliza in a very limited Turing test, and this is actually the setting of a Turing test. And if you decide that Eliza helps you, then in that limited setting it has passed.
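The question-reflecting trick described above can be sketched in a few lines. This is a hypothetical minimal reconstruction, not Weizenbaum's original script: the rule patterns, templates, and function names below are made up for illustration. The idea is to match a keyword pattern, reflect first-person words into second person, and hand the statement back as a question.

```python
import re

# Hypothetical Eliza-style rules: (pattern, response template).
# {0} is filled with the pronoun-reflected remainder of the utterance.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# First-person to second-person word swaps applied to the captured phrase.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's question, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # No rule fired: turn the conversation back to the user, Eliza-style.
    return "Please tell me more."

print(respond("I am worried about my exams"))
# → How long have you been worried about your exams?
```

The point of the sketch is how little machinery is involved: there is no understanding at all, only surface pattern matching and pronoun reflection, which is exactly why the illusion collapses as soon as the user stops being serious.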

Part of chapter: Recaps
Access: Open access
Duration: 00:15:53 min
Recording date: 2020-10-26
Uploaded on: 2020-10-26 12:57:03
Language: en-US

Recap: Introduction: Rationality in Artificial Intelligence. Additionally, some interesting notes on PARRY and Eliza.

Main video on this topic: chapter 6, clip 1.
