5 - Artificial Intelligence I [ID:9644]

The following content has been provided by the University of Erlangen-Nürnberg.

Welcome to Artificial Intelligence.

First observation of the day, I am not Professor Kohlhase, you've got that right.

He's away on something today and I will be holding the lecture in his stead.

Otherwise it will mostly be the same.

If you don't understand something or if there's some error on the slides

or if you are unclear about a concept or something, please feel free to interrupt.

I know I'm the new guy but you're still students and I still want to teach you stuff

so please do interrupt me if something's unclear.

Bonus points for anyone who finds errors on the slides. Not actual bonus points, but feel-good bonus points.

I looked over them yesterday and I found at least four.

So let me know.

We've also got this microphone, so everybody on the recording can hear you too.

You know how this works by now.

Wonderful.

The first chapter we're talking about today is complexity analysis in artificial intelligence.

Just to gauge the room: how many of you have dealt with some sort of complexity analysis in your studies before today?

That's like a solid two-thirds.

For you, this will probably be a recap.

For the rest of you this is entirely new stuff.

So if you have an algorithm, there is usually something associated with it that is called the complexity of the algorithm.

And we need to deal with this for several reasons.

For example, we still live in the age of Moore's law.

That means computing power per dollar still usually doubles every two years or so.

And that's often cited as the reason why, if this exponential curve continues, we will eventually have artificial general intelligence, simply because there's so much computing power available.

That's a bit fallacious under some circumstances, I'd say.

For example, it presupposes that our algorithms are reasonably efficient, that we have an algorithm whose cost doesn't grow even faster than the computing power per dollar, per CPU, or whatever you have.
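To make that concrete, here's a minimal Python sketch (the operation budget and function names are my own illustration, not from the slides): doubling the hardware doubles the input size a linear algorithm can handle in the same time, but buys an exponential algorithm only one extra unit of input size.

```python
import math

# Illustration with made-up numbers: if the hardware gets twice as fast,
# how much larger an input can we solve in the same wall-clock time?

def max_n_linear(budget_ops):
    # Linear algorithm: cost(n) = n operations, so feasible n grows with the budget.
    return budget_ops

def max_n_exponential(budget_ops):
    # Exponential algorithm: cost(n) = 2**n operations, so feasible n = log2(budget).
    return math.log2(budget_ops)

budget = 1_000_000  # operations we can afford today (made-up number)
for speedup in (1, 2, 4):  # hardware after 0, 2, and 4 years of doubling
    ops = budget * speedup
    print(f"{speedup}x compute: linear n = {max_n_linear(ops):,}, "
          f"exponential n = {max_n_exponential(ops):.1f}")
```

Each doubling of compute doubles the feasible n for the linear algorithm, but adds exactly 1 to the feasible n for the exponential one.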

So we need to find some way of coping with how complex an algorithm is.

And that's usually a measure of the runtime of an algorithm in relation to the input size.

So if I give you a problem of size n, whatever that size may be: it might be the number of digits in an integer, the length of a list, or the size of a database.

For some input to your algorithm, you have some sort of time the algorithm takes to finish whatever it is it's doing.

And then we measure the time it took relative to the size of the input, in some sort of complexity measure.
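As a minimal sketch of what that means in practice (the summing function here is just a stand-in for whatever algorithm you want to measure):

```python
import time

def algorithm(data):
    # Stand-in for whatever your algorithm does; here it just sums a list.
    return sum(data)

for n in (10_000, 100_000, 1_000_000):
    inputs = list(range(n))
    start = time.perf_counter()
    algorithm(inputs)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9,}: {elapsed * 1e6:10.1f} microseconds")
```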

Here are three examples.

We have a linear algorithm that takes 100 microseconds times n,

where n is the size of the input: the length of a list, the number of digits in a number, whatever.

Then we have a quadratic algorithm that takes 7 n squared microseconds.

And we have an exponential algorithm that takes 2 to the n microseconds.

And up to a certain input size, they are basically the same, in the sense that you, executing the algorithm from your terminal, would probably not notice the difference.

Who here thinks they could tell the difference between 175 microseconds and 1 millisecond?

You might, but probably not.
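You can check that directly by tabulating the three cost functions from the slide; a quick sketch, with all times in microseconds:

```python
costs = {
    "linear (100*n)":    lambda n: 100 * n,
    "quadratic (7*n^2)": lambda n: 7 * n * n,
    "exponential (2^n)": lambda n: 2 ** n,
}

for n in (5, 10, 30, 50):
    print(f"n = {n:>2}: " + ", ".join(
        f"{name} = {f(n):,} us" for name, f in costs.items()))
```

At n = 5 all three finish in well under a millisecond, so nobody at a terminal could tell them apart; at n = 50 the linear algorithm still takes only 5 milliseconds, while the exponential one needs about 35 years.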

Part of a video series

Presenter: Jonas Betzendahl

Access: Open Access

Duration: 01:25:26

Recording date: 2018-10-31

Uploaded on: 2018-11-05 11:56:16

Language: en-US
