9 - Recap Clip 5.2: Complexity Analysis in AI (Part 2) [ID:21853]

And so what I'm assuming is that you can essentially do that for algorithms. And since I'm not sure anybody has done that yet, I've written it down. And since we will from time to time have algorithms in pseudocode, these are essentially all the things you can do in programming pseudocode, right? We have constants and variables, we have applications, that is, function application, we have assignments, we have composition, meaning sequences of algorithms, we have branching, and we have looping. And for all of those, we have different ways of compounding the complexities. I know this is very abstract and gives you a headache; what you really want to do is look at a couple of algorithms and go through this. It's very, very natural: I've just written down what computer-science common sense would dictate anyway. It's basically counting on steroids, counting instructions.
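
To make these compounding rules concrete, here is a minimal Python sketch, not from the lecture materials: the costs of a sequence of statements add up, a branch costs its test plus at most the more expensive of its arms, and a loop costs the number of iterations times the cost of its body. The function name sum_of_even_squares and the explicit steps counter are illustrative choices of mine.

```python
def sum_of_even_squares(numbers):
    """Toy pseudocode-style function with an explicit instruction counter."""
    steps = 0

    total = 0                    # assignment: constant cost
    steps += 1

    for x in numbers:            # loop: (number of iterations) x (cost of body)
        if x % 2 == 0:           # branch: cost of the test + at most the costlier arm
            total += x * x       # 'then' arm: a couple of constant-cost operations
        steps += 3               # rough per-iteration charge: test + multiply + add

    return total, steps          # sequence: costs of consecutive statements add up


if __name__ == "__main__":
    for n in (10, 100, 1000):
        _, steps = sum_of_even_squares(list(range(n)))
        print(n, steps)          # grows linearly: O(1) + n * O(1) = O(n)
```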

So that you can actually play with this, we've given you a homework example for that. We have a large class; at some point we had more than 170 StudOn signups. If that persists, it is not clear that we will be able to correct all the homework assignments. We'll try our best. I have five tutors who can do this; we haven't been able to find anybody else. So we'll do as much as we can and then just randomly leave things out. We will, however, publish master solutions, so that for those problems which you have solved but do not get feedback on, you can give yourself feedback by comparing with the master solution.

Okay, so this gives you the full story of the complexity of algorithms. Of course, you still have to practice actually going through an algorithm and finding out what its time and space complexity is. Please look at this critically; I've written it up very early one morning, and there must be errors in there. Right, the next thing: up to this point, we've been talking about the complexities of actual algorithms. The next thing is to talk about the complexities of problem classes. There's a difference: the complexity of a problem class is the complexity of the best algorithm you can find for it, and that can differ from the complexity of the algorithm you happen to know. Multiplication is one example, just multiplying numbers.

Well, let's do the thing we did in elementary school: seven times two is fourteen, write the four, carry the one; seven times one is seven, plus the carry is eight; and so on. Who can tell me the complexity? Yes, n squared. I see a square here: if this grows and that grows, then we have a square. Okay, good, then you were completely right. So this multiplication algorithm, the elementary-school algorithm, is actually quadratic.
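
As a minimal Python sketch of that elementary-school procedure (the function name schoolbook_multiply and the digit-string interface are my own choices, not from the lecture): the nested loops over the digits of the two factors are exactly where the n times n, hence quadratic, number of steps comes from.

```python
def schoolbook_multiply(a: str, b: str) -> str:
    """Multiply two non-negative decimal numbers given as digit strings,
    the way it is done in elementary school: one single-digit product per
    pair of digits, plus carries."""
    n, m = len(a), len(b)
    result = [0] * (n + m)                  # enough room for all carries
    for i in range(n - 1, -1, -1):          # outer loop: n iterations
        carry = 0
        for j in range(m - 1, -1, -1):      # inner loop: m iterations, so n*m products
            prod = int(a[i]) * int(b[j]) + result[i + j + 1] + carry
            result[i + j + 1] = prod % 10   # write this column's digit
            carry = prod // 10              # carry into the next column
        result[i] += carry
    return "".join(map(str, result)).lstrip("0") or "0"


assert schoolbook_multiply("127", "72") == str(127 * 72)
```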

It turns out we can do better. A guy called Karatsuba came up with what is now known as Karatsuba multiplication, which is O(n^(log2 3)), roughly O(n^1.58). Nicer.
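
A minimal Python sketch of Karatsuba's idea (the function name and the plain-integer interface are mine): split each factor into a high and a low half and get by with three recursive half-size multiplications instead of four, which gives the recurrence T(n) = 3 T(n/2) + O(n) and hence the O(n^1.58) bound.

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication: three half-size products instead of four."""
    if x < 10 or y < 10:                    # base case: a single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    x_hi, x_lo = divmod(x, base)            # x = x_hi * base + x_lo
    y_hi, y_lo = divmod(y, base)
    hi = karatsuba(x_hi, y_hi)              # product of the high halves
    lo = karatsuba(x_lo, y_lo)              # product of the low halves
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo   # the "saved" fourth product
    return hi * base ** 2 + mid * base + lo


assert karatsuba(1234, 5678) == 1234 * 5678
```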

It turns out we can do even better: there is Schönhage-Strassen multiplication, which is O(n log n log log n). What does that mean? It means that, apart from logarithmic factors and up to some constant factor, we eventually get essentially linear behaviour, but that constant factor is pretty awful.

What you're doing, essentially, is this: if you have a number like this, you interpret its digits as the coefficients of a polynomial in base 10. On such a polynomial you can run a fast Fourier transform, which takes O(n log n). Once both factors are Fourier-transformed, multiplying them amounts to a pointwise product of the transforms, which you can do in linear time, and then you transform the whole thing back with the inverse fast Fourier transform. Altogether you get an essentially linear-time multiplication. It only pays off if you want to multiply numbers of ten thousand digits and upwards, which may not sound practical, but this is what all the high-precision implementations actually use.
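
Here is a minimal Python sketch of that transform, multiply pointwise, transform back idea, using numpy's floating-point FFT purely for illustration; the real Schönhage-Strassen algorithm uses exact number-theoretic transforms instead, and the function name fft_multiply is my own.

```python
import numpy as np


def fft_multiply(x: int, y: int) -> int:
    """Multiply two non-negative integers via FFT: digits become polynomial
    coefficients, the product is a pointwise product in the frequency domain,
    and carries are propagated at the end. Floating-point rounding is only
    safe here for moderately sized inputs."""
    a = [int(d) for d in str(x)[::-1]]       # least-significant digit first
    b = [int(d) for d in str(y)[::-1]]
    size = len(a) + len(b)                   # room for the full product
    fa = np.fft.rfft(a, size)                # transform both factors: O(n log n)
    fb = np.fft.rfft(b, size)
    coeffs = np.fft.irfft(fa * fb, size)     # pointwise product, then inverse FFT
    digits, carry = [], 0
    for c in coeffs:                         # propagate carries: O(n)
        carry, digit = divmod(int(round(c)) + carry, 10)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        digits.append(digit)
    return int("".join(map(str, digits[::-1])) or "0")


assert fft_multiply(31415926, 27182818) == 31415926 * 27182818
```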

Just to make the point: multiplication as a problem class is essentially linear. I don't think you can get any better than linear in this case, because you have to read the input; anything below linear can't actually read its arguments. For multiplication that is unavoidable anyway, because we actually care about the product, every digit of it. The problem class is essentially linear because the best algorithm is essentially linear, whereas the algorithm you learned in elementary school is quadratic. That actually is important.

It's important to know the classes that you're getting involved with. My point was this: if you try to get an efficient algorithm, "efficient" typically means something like at worst polynomial. Some people think linear is the only efficient thing; some people in AI still believe in quadratic. Efficient, or tractable as we say, is anything that's below exponential. If you know that the problem class of the problem for which you want to find an algorithm is NP-hard, then you know that you're not going to find any efficient algorithm for the general case (unless P = NP).
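
To make the tractability boundary concrete, here is a tiny Python comparison of polynomial and exponential step counts; the assumed rate of 10^9 basic steps per second is my own illustrative figure, not from the recap.

```python
def wall_clock(steps: float) -> str:
    """Rough wall-clock time, assuming 10^9 basic steps per second."""
    return f"{steps / 1e9:.3g} s"


for n in (20, 50, 100):
    print(f"n = {n}:",
          "n^2 ->", wall_clock(n ** 2),
          "| n^3 ->", wall_clock(n ** 3),
          "| 2^n ->", wall_clock(2 ** n))
# The polynomial columns stay far below a second, while 2^100 steps would take
# about 4 * 10^13 years: that gap is what "tractable" versus "intractable" means.
```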

Part of chapter: Recaps
Access: Open access
Duration: 00:13:23 min
Recording date: 2020-10-26
Uploaded on: 2020-10-26 12:26:55
Language: en-US

Recap: Complexity Analysis in AI (Part 2)

Main video on the topic in chapter 5 clip 2.
