6 - AI Topics Covered [ID:21719]

Okay, good, we've looked at that. So what are we going to do this semester? We're basically

going to start small. We've already covered artificial intelligence, right? No, come on.

There. We're going to talk about intelligent agents. AI is a huge vegetable garden that

has all these little plots and patches and something different grows in every one of them,

right? There's a couple of vegetable patches here and there's a couple of vegetable patches

on the next slide. So you basically want to have kind of a uniform metaphor. So you don't

have to think about carrots and potatoes and squash and all of those kind of things, but

you can think about vegetables. So what we're going to do is we're going to look at this

kind of unifying metaphor of intelligent agents. And we're going to use Prolog. Who

of you has experience programming in Prolog? One, a half, or a third maybe. Okay. You're

going to hate it. It's one of the original AI languages, developed because nothing

else helped. No, that's not true. They also developed a language called Lisp, which is

the kind of original functional programming language. If you know, many of you have seen

Scala, right? Scala is kind of the great-granddaughter of Lisp. One of the best programming languages

we have. Logic programming was kind of the big hype of the 80s, where actually the Japanese

had the fifth generation project. I might be wrong on the five there. Fourth generation

maybe. Where they said, well, we're going to conquer AI by building Prolog machines

and putting Prolog into silicon. And everybody was very scared, which was really good because

we could write grant proposals and say, well, if you don't give us the money, the Japanese

are going to overtake us. Okay. It's always very good to have some kind of an enemy far

off. That didn't quite pan out, so we're not actually using Prolog very much at the

moment. But it's an influential paradigm, which you haven't seen yet, and so we're going

to use it. And it's wonderful because you can do almost all of symbolic AI in a couple

of lines. So think of logic as a gem in our national heritage, and it's good for you to

know. Okay. And after AI-1, you can safely forget it. Except, of course, that it changed

your brain forever, which is a good thing. Okay. So we're going to start, actually those

two are going to be swapped. And really the only thing you have to learn for Prolog is

recursive programming. If you learned that for Scala already, wonderful. You're not going

to have any problems except for the syntax. If you haven't learned it, now's your chance.
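Really the only thing to pick up is the recursive pattern itself. As a minimal illustration (in Python rather than Prolog, since the idea is language-independent; the Prolog clause in the comment is the standard textbook version):

```python
def length(lst):
    # Base case: the empty list has length 0 -- in Prolog, length([], 0).
    if not lst:
        return 0
    # Recursive case: one element plus the length of the rest,
    # mirroring the Prolog clause length([_|T], N) :- length(T, M), N is M + 1.
    return 1 + length(lst[1:])

print(length([3, 1, 4, 1, 5]))  # 5
```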

Everybody has to know recursive programming, I believe. Good. The next big block is something

we call general problem solving. It's a class of algorithms, some of them you already know,

which are based on some kind of search. Search in a state space. And they're so general,

you can formulate almost all problems such that they become search problems, and you

can apply off-the-shelf algorithms to them. And they're the reason why in the 50s, the

people advocating AI said, well, just give us a couple of millions and then we come back

in five years and we've solved AI. What they didn't quite understand was that all of the

algorithms we know are exponential, which means they behave really badly. No matter

how much computing power you throw at them, they can digest that and not actually do much

better. That's kind of what exponential means. That's not something you do with computing

power. You need a way of telling these algorithms kind of an intuition where they should go.
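One standard way to give a search algorithm such an intuition is a heuristic function, as in A* search. A minimal sketch on a toy grid problem (the example problem and all names are my own, not from the lecture):

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand states in order of f(n) = g(n) + h(n).
    `neighbors(n)` yields (next_state, step_cost); `h` estimates cost to goal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                              # already reached more cheaply
        best_g[state] = g
        for nxt, cost in neighbors(state):
            heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Toy problem: move on a 5x5 grid from (0,0) to (4,4) with unit step costs.
def grid_neighbors(pos):
    x, y = pos
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

# Manhattan distance: an admissible "sense of direction" toward the goal.
manhattan = lambda p: abs(4 - p[0]) + abs(4 - p[1])
path, cost = astar((0, 0), (4, 4), grid_neighbors, manhattan)
print(cost)  # 8
```

With h = 0 this degenerates into uniform cost search, which is exactly the "no intuition" case: it still finds the answer, just by exploring far more of the state space.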

We're going to kind of establish the general framework. We're going to look at things like

depth-first search, breadth-first search, uniform cost search, A* search, and some

of those you've already seen, but I want to kind of go over them because we want to have

a common basis. Then we're going to use search-like algorithms in game playing. Look at AlphaGo,

how that works, how we can add a sense of smell there. And then look at constraint satisfaction

problems where we actually, instead of having a kind of a dumb state space to search in,

we have a little bit more clever state space and that opens new avenues here. Generally

in AI, there's always a tension between knowing more about the world, which makes the state

space even more terrible than it was already, and using that additional knowledge

about the world to be more efficient in this even more terrible state space.
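That constraint-solving idea, knowing more about the problem and using it to prune, can be sketched as simple backtracking over a constraint satisfaction problem (toy instance and all names my own):

```python
def solve_csp(variables, domains, constraint, assignment=None):
    """Backtracking search for a CSP: instead of blindly enumerating the whole
    state space, abandon any partial assignment that already violates a
    constraint -- i.e. use knowledge about the problem to search less."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)                   # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if constraint(assignment):                # prune as early as possible
            result = solve_csp(variables, domains, constraint, assignment)
            if result is not None:
                return result
        del assignment[var]                       # backtrack
    return None

# Toy instance: color a triangle graph with 3 colors, adjacent nodes differ.
edges = [("A", "B"), ("B", "C"), ("A", "C")]
ok = lambda a: all(a[u] != a[v] for u, v in edges if u in a and v in a)
coloring = solve_csp(["A", "B", "C"], {v: ["r", "g", "b"] for v in "ABC"}, ok)
print(coloring)  # {'A': 'r', 'B': 'g', 'C': 'b'}
```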

Part of a chapter:
Artificial Intelligence – Who?, What?, When?, Where?, and Why?

Accessible via

Open Access

Duration

00:14:51 min

Recording date

2020-10-23

Uploaded on

2020-10-23 14:06:57

Language

en-US

Detailed overview of the topics of AI-1 and AI-2.
