10 - Lecture 9, 14 Nov 2022 [ID:46115]

So, I hope everything so far has been reasonably clear, because in the next two sessions (let's see

how far we get today) it becomes a little more fuzzy, a bit harder to do: we'll talk

about how we actually reason about algorithms, how to understand their efficiency,

how we can, I wouldn't call it measure, but estimate the run times, and what kinds of considerations

you should always have in mind when developing algorithms.

Let me just check if I am recording.

Yeah, everything's recording.

Just to remind you.

Yeah, the recording is running.

Very good.

Okay, so I hope that the exercises so far have been relatively simple.

Part of this week's package will be a more difficult kind of exercise, because it

has more to do with practical skills and craftsmanship.

Okay, so what we're talking today about is how we can measure the order of growth of

algorithms.

So how do they scale with data?

We'll introduce something that in computer science is called big O notation.

Looks very mathematical.

It is not.

It's more like a kind of categorization of algorithms.

And from then on, we go by example and look into particular cases that we can try to categorize

into one of these big O notation categories.
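To make these categories concrete, here is a minimal Python sketch; the function names and data are illustrative choices of my own, not examples from the lecture:

```python
def first_element(data):
    # O(1): one step, no matter how large the input is
    return data[0]

def contains(data, target):
    # O(n): in the worst case we scan every element once
    for x in data:
        if x == target:
            return True
    return False

def has_duplicates(data):
    # O(n^2): in the worst case we compare every pair of elements
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            if data[i] == data[j]:
                return True
    return False
```

Doubling the input leaves `first_element` unchanged, roughly doubles the work in `contains`, and roughly quadruples it in `has_duplicates`; that difference in growth is what the categories capture.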

So again, as I said, this is not an exact science; it's rather about

bounds, and we want bounds that are as close as possible to the actual runtime

if, say, our data goes to infinity.

Things like that.

You know all the calculus tricks: if something goes to infinity, we can drop all

of the constants.

That's also what we're going to do with these considerations.
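The constant-dropping idea can be sketched numerically. The operation-count formula below is a made-up example, not one from the lecture: for a hypothetical algorithm that takes 3n² + 100n + 500 steps, the ratio to n² settles near a constant as n grows, so the algorithm is categorized as O(n²).

```python
def steps(n):
    # hypothetical operation count for some algorithm (assumed formula)
    return 3 * n**2 + 100 * n + 500

# as n grows, steps(n) / n**2 approaches the constant 3,
# so the lower-order terms and constants stop mattering
for n in (10, 1_000, 100_000):
    print(n, steps(n) / n**2)
```

For n = 10 the ratio is still 18, but by n = 100 000 it is about 3.001: only the n² term survives at scale.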

Now, of course, you could say that it doesn't matter.

Computers are getting faster.

If you write an inefficient algorithm that takes five minutes for doing something, let's

just wait two years.

Computers are always getting better and faster.

So we just need to wait and then run it.

Very often it's not about the computers.

Of course, they have scaled, more or less polynomially, from a couple of transistors to the billions of

transistors we have now.

So yes, we are a little bit faster.

But I mean, for example, landing on the moon, I would still prefer the Apollo computer compared

to my phone computer simply because of the redundancy that was built in.

And the processors are actually not that slow for the type of algorithms we are talking

about here.

So it's not like orders of magnitude faster.

So the growth in computer power is quasi-linear.

But what we observe is that data usually grows at a much bigger scale, polynomially, very

much faster than our computers get faster.

And here's an example: data can really be very, very large.
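A rough back-of-the-envelope sketch of this point, with made-up numbers: a faster machine only buys a constant factor, while a quadratic algorithm on growing data loses much more than that.

```python
# Assumption: an O(n^2) algorithm takes 1 second on 1,000 items today.
base_n, base_time = 1_000, 1.0

speedup = 2.0    # assume the computer gets twice as fast
n = 10 * base_n  # ...but the data grows tenfold in the same period

# quadratic scaling: 10x the data means 100x the work,
# and the 2x speedup only halves that
new_time = base_time * (n / base_n) ** 2 / speedup
print(new_time)  # 50.0 seconds on the faster machine
```

Waiting for faster hardware gives a factor of 2; the data growth costs a factor of 100, so the net runtime still explodes.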

Accessible via: Open Access

Duration: 01:20:25 min

Recording date: 2022-12-12

Uploaded: 2022-12-12 19:56:03

Language: en-US

Tags: algorithm complexity, big O notation, Python programming