Welcome to AI in November, I guess. Last week I wasn't here and Jonas Betzendahl actually gave
the lecture. If you have any questions about that, I'll go over the material again. To give you
the grand picture of things: after the admin and the initial blah blah, we started actually doing things.
We looked at Prolog. Another thing that's going to be important is complexity analysis.
How long do our algorithms take in general? One of the reasons this is important is that in
public you always hear the argument that the singularity is near because computing power
grows exponentially. That much is right. But we will see that the running times of essentially
all AI algorithms also grow exponentially, if not worse, so the question of whether exponentially
growing computing power actually means anything still needs to be answered. For that we need an
idea of how our algorithms perform. In the years that I've been giving this course, one of the
things I noticed was that AI students typically have difficulties, when given an algorithm,
seeing what the complexity of that algorithm is. So I decided to do a little recap of that
at the beginning. The first two slides are really there to convince you that complexity matters:
depending on what complexity class we are in, it is either possible to solve big problems or it
is not. If we are somewhere in the realm of polynomial-time algorithms, then we can hope for
solutions even for big problems, if you look at the table. If we are in the exponential realm,
then even for relatively small problems, in this case I've used a hundred as the size, just
made-up numbers, we are well into the range of "I don't care about the result at that time
anymore". Complexity matters; it really matters for the size of problems we can handle.
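To get a feeling for those numbers, here is a minimal sketch, not from the lecture, in Python; it assumes a made-up machine that executes a billion basic steps per second and compares an n-squared step count with a 2-to-the-n one.

```python
# Minimal sketch (not from the lecture): how long would n^2 vs. 2^n basic
# steps take on a made-up machine doing one billion steps per second?
OPS_PER_SECOND = 1e9

def runtime_seconds(steps: float) -> float:
    return steps / OPS_PER_SECOND

for n in (10, 50, 100):
    quadratic = n ** 2        # polynomial growth: stays harmless
    exponential = 2 ** n      # exponential growth: explodes
    print(f"n = {n:3d}: n^2 -> {runtime_seconds(quadratic):.1e} s, "
          f"2^n -> {runtime_seconds(exponential):.1e} s")
```

At n = 100 the quadratic count finishes in about ten microseconds, while 2^100 steps would take on the order of 10^21 seconds, which is exactly the "I don't care about the result at that time anymore" range.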
We want to keep an eye on the complexity of our algorithms: both the worst-case complexity, how
long might this thing take, which is relatively simple to come to grips with, and also the
average-case complexity. Very often we can identify simpler cases where the complexity is better.
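One standard illustration of this gap, picked here as an example rather than taken from the slides, is quicksort: a naive pivot choice gives a worst case of O(n^2), but on average the running time is O(n log n).

```python
def quicksort(items):
    """Hypothetical illustration: average case O(n log n), worst case O(n^2)."""
    if len(items) <= 1:
        return items
    pivot = items[0]                               # naive pivot choice
    smaller = [x for x in items[1:] if x < pivot]
    larger = [x for x in items[1:] if x >= pivot]
    # On random input the pivot splits the list roughly in half on average,
    # which gives O(n log n). On already sorted input every split is maximally
    # lopsided and we fall into the worst case, O(n^2).
    return quicksort(smaller) + [pivot] + quicksort(larger)
```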
I'm assuming that you've seen big O and that kind of thing before. We care about these kinds of
complexity classes: quadratic is well tractable, well below the growth of computing power, and
polynomial algorithms in general still are; with those we feel kind of comfortable. Exponential
is always a warning sign, and of course, as a little spoiler, most of the stuff we're going to
do is going to be exponential. I'm assuming you can do big O
calculations and the main idea is that these complexity classes are all contained in each
other. That actually has a very nice computational consequence. If you have an algorithm that has
parts that are big O of n cubed and other parts that are big O of n squared, you can forget
about the n squared: big O of n cubed plus big O of n squared is just big O of n cubed.
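As a hypothetical illustration, not one from the slides, consider a function with an O(n cubed) phase followed by an O(n squared) phase; the sum is still just O(n cubed).

```python
def two_phase(matrix, vector):
    """Hypothetical illustration of O(n^3) + O(n^2) = O(n^3).

    Assumes matrix is an n-by-n nested list and vector has length n.
    """
    n = len(vector)
    total = 0

    # Phase 1: three nested loops over n iterations each -> O(n^3).
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += matrix[i][j] * matrix[j][k]

    # Phase 2: two nested loops over n iterations each -> O(n^2).
    for i in range(n):
        for j in range(n):
            total += matrix[i][j] * vector[j]

    # Composition: O(n^3) + O(n^2) = O(n^3); the second phase is dominated.
    return total
```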
Of course if you have nested things then you have to multiply. What I'm assuming is that you
essentially can do that for algorithms. Since I'm not sure anybody has done that yet, I've written
it down. We will from time to time present algorithms in pseudo code, and pseudo code has
essentially all the things you can do in a programming language: constants and variables,
applications, that is, function application, assignments, composition, meaning sequences of
algorithms, branching, and looping. For each of those we have a rule for compounding the
complexities. I know this is very abstract and gives you a headache. What you really want
to do is look at a couple of algorithms and go through this; it's very, very natural. I've
just written down what computer science common sense would dictate anyway. It's basically
counting on steroids, counting instructions.
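To see how that counting works on something concrete, here is a small sketch, written in Python rather than pseudo code and not taken from the slides, with the usual rules noted next to each construct: constants and assignments cost O(1), a sequence adds the costs, a branch costs as much as its more expensive branch, and a loop multiplies the number of iterations with the cost of its body.

```python
def complexity_walkthrough(numbers):
    """Hypothetical example of reading off a complexity by counting constructs."""
    n = len(numbers)                      # function application + assignment: O(1)
    best = None                           # constant + assignment: O(1)

    for x in numbers:                     # loop: n iterations times the body cost
        if best is None or x > best:      # branching: max of the branches, here O(1)
            best = x                      # assignment: O(1)
    # loop body is O(1), so the whole loop is n * O(1) = O(n)

    pair_count = 0
    for x in numbers:                     # outer loop: n iterations
        for y in numbers:                 # inner loop: n iterations, body O(1)
            if x + y == best:
                pair_count += 1
    # nested loops multiply: n * (n * O(1)) = O(n^2)

    # composition (sequence of the pieces above): O(1) + O(n) + O(n^2) = O(n^2)
    return best, pair_count
```

The overall complexity is dominated by the nested loops; reading it off really is just the bookkeeping described above.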
So that you can actually play with this, we've given you a homework example for that. We have a
large class; at some point we had more than 170 signups. If that persists, it is not clear that
we will be able to correct all the homework assignments. We'll try our best. I have five tutors;
beyond that we haven't been able to find anybody else. We'll do as much as we can and then just
randomly leave out stuff. We will, however, publish master solutions, so that for those
problems which you've solved but do not get feedback on, you can give yourself feedback
by comparing with the master solution. Sorry about this. It's not ideal. I'm not going
to say, oh, half of you drop out; this is the best we can do. But you can help the situation
next year by becoming an AI TA. We want all of you, well, not all of you,