17 - Artificial Intelligence II [ID:9280]


Hello again.

I think I can now use an on-screen pointer.

It took a while to get this here to talk to my new Mac.

We're doing machine learning at the moment.

The overall setting is that we have these agents that do what they do, and we accompany them with a kind of critic.

Yes.

I didn't have a lot of practice.

The critic interprets some kind of performance standard, and that gives learning incentives to the learning element.

I should probably use this one.

This is too fast.

Okay.

Then we might have a problem generator and so on.
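The learning-agent architecture sketched so far can be put into code. This is a minimal, illustrative sketch, not the lecture's formal model: every component here (the parity-based performance standard, the table-based learning element) is an invented stand-in, chosen only to show how critic, learning element, and problem generator fit together.

```python
# Hedged sketch of the learning-agent loop: a critic applies a performance
# standard to the agent's behaviour, the learning element updates its policy
# from the critic's feedback, and a problem generator proposes new situations.
# All concrete components below are illustrative stubs.

def performance_standard(state, action):
    """Toy standard: reward 1 when the action matches the state's parity."""
    return 1 if action == state % 2 else 0

def critic(state, action):
    """The critic just interprets the performance standard."""
    return performance_standard(state, action)

class LearningElement:
    """Remembers, per state, the best-rewarded action seen so far."""
    def __init__(self):
        self.policy = {}  # state -> (action, feedback)

    def best_feedback(self, state):
        return self.policy.get(state, (None, -1))[1]

    def update(self, state, action, feedback):
        if feedback > self.best_feedback(state):
            self.policy[state] = (action, feedback)

    def act(self, state):
        action, _ = self.policy.get(state, (0, -1))
        return action

def problem_generator():
    """Proposes new (state, action) situations for the agent to try."""
    for state in range(4):
        for action in (0, 1):
            yield state, action

learner = LearningElement()
for state, action in problem_generator():
    learner.update(state, action, critic(state, action))

print([learner.act(s) for s in range(4)])  # learned policy: [0, 1, 0, 1]
```

After the loop, the learning element has absorbed the performance standard through the critic's feedback alone, which is the point of the architecture: the agent never inspects the standard directly.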

Really what we're interested in is how does that work?

We're in the phase where we're looking at general phenomena in the...

No, that was mine.

We're looking at the general phenomenon: what is learning at all?

In particular, we're interested in... That doesn't work. Let's see whether this... In particular, we're interested in...

No fun.

Okay.

I did say full screen.

Yes.

Single page.

Yes.

Where is that?

There.

Ha.

Thank you very much.

I don't use preview at all.

This looks much better.

Still yellow.

Okay.

What we're interested in at the moment is something we call supervised learning, which is essentially learning by being taught.

You have a set of examples, and these examples actually say: in this situation, you should do that.

Okay.

Very simply put, that's what we call inductive learning: we have a set of examples of a function f from states to outcomes,

which is the ground truth, the gold standard, that we want to learn.

What we want to do is find a hypothesis, a function that behaves similarly to, or ideally exactly like, f on the examples we have already seen,

and that has good prediction quality for future, unseen examples.
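The setup just described can be sketched in a few lines. This is a hedged illustration, not the lecture's formal definitions: the examples, the threshold-function hypothesis space, and the error measure are all invented for the sketch.

```python
# Minimal sketch of inductive learning: given examples (x, f(x)) of an
# unknown target function f, pick from a hypothesis space the hypothesis
# that best matches the examples. All concrete names are illustrative.

def training_error(h, examples):
    """Count how many examples the hypothesis h gets wrong."""
    return sum(1 for x, y in examples if h(x) != y)

def best_hypothesis(hypothesis_space, examples):
    """Return the hypothesis with the fewest errors on the examples."""
    return min(hypothesis_space, key=lambda h: training_error(h, examples))

# Examples of an unknown Boolean function f over integers.
examples = [(0, False), (1, True), (2, True), (3, True)]

# A tiny hypothesis space: threshold functions "x >= t" for small t.
hypothesis_space = [lambda x, t=t: x >= t for t in range(5)]

h = best_hypothesis(hypothesis_space, examples)
print([h(x) for x, _ in examples])  # → [False, True, True, True]
```

Here the chosen hypothesis (the threshold x >= 1) happens to reproduce every example exactly; whether such a perfect fit exists at all depends, as the lecture stresses, on the hypothesis space.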

The elephant in the room here is that the hypothesis has to come from some kind of a hypothesis space.

This hypothesis space is something that in the background determines a lot of things.

We've looked at these examples of curve fitting, where we've looked at different hypothesis spaces: linear polynomials, quadratic polynomials,

order-four polynomials, order-gazillion polynomials, or something like that.

You can see that what the best hypotheses are depends on the hypothesis space.
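To make that dependence concrete, here is a hedged sketch that fits the same data with two different hypothesis spaces, constants (degree-0 polynomials) and lines (degree-1), using closed-form least squares. The data and the error measure are invented for the illustration.

```python
# Sketch: how the hypothesis space changes what the "best hypothesis" is.
# We fit the same examples once with constant hypotheses and once with
# linear hypotheses, via closed-form least squares. Illustrative only.

def mean(v):
    return sum(v) / len(v)

def fit_constant(xs, ys):
    """Best constant hypothesis: the mean of the outcomes."""
    c = mean(ys)
    return lambda x: c

def fit_line(xs, ys):
    """Best linear hypothesis via least squares."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def sse(h, xs, ys):
    """Sum of squared errors of hypothesis h on the examples."""
    return sum((h(x) - y) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.9, 4.1, 5.9]   # roughly y = 2x, with small noise

const_h = fit_constant(xs, ys)
line_h = fit_line(xs, ys)
print(sse(const_h, xs, ys) > sse(line_h, xs, ys))  # → True
```

The richer hypothesis space fits the examples much better here; with a gazillion-order space it could fit them exactly, which is not necessarily a virtue for prediction on unseen examples.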

Sometimes we have consistent hypotheses, which means they agree with all of the examples.

Sometimes we have non-consistent or partially consistent hypotheses, all depending on which hypothesis space you're allowed to pick from.
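A consistency check is a one-liner; a short hedged sketch, with examples and candidate hypotheses invented purely for illustration:

```python
# A hypothesis is consistent when it agrees with every example.

def consistent(h, examples):
    return all(h(x) == y for x, y in examples)

examples = [(1, 1), (2, 4), (3, 9)]      # samples of x -> x^2
square = lambda x: x * x
double = lambda x: 2 * x

print(consistent(square, examples))  # → True: agrees on all examples
print(consistent(double, examples))  # → False: wrong on x = 1 and x = 3
```

Note that a partially consistent hypothesis like `double` may still be the best one available if the hypothesis space contains nothing better.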

Do you assume that the training set does not contain any outliers?

Or does it matter? Because if we're trying to learn from this training set and it contains outliers, we might learn something wrong.

Yes, right. If we have wrong examples, we have to deal with that.

We might really say we need to have outlier detection: if there can be outliers, we have to deal with outliers.
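One simple way to deal with outliers can be sketched as follows: fit a hypothesis, then flag examples whose error is far above the typical error. This is a hedged illustration; the data, the fitted hypothesis, and the threshold factor are all invented for the sketch.

```python
# Hedged sketch of residual-based outlier detection: flag examples whose
# error under the hypothesis is much larger than the average error.
# The threshold factor is an illustrative choice, not a recommendation.

def flag_outliers(h, examples, factor=3.0):
    residuals = [abs(h(x) - y) for x, y in examples]
    typical = sum(residuals) / len(residuals)
    return [(x, y) for (x, y), r in zip(examples, residuals)
            if r > factor * typical]

# Data roughly on y = x, with one corrupted example.
examples = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 9.0)]
h = lambda x: x  # hypothesis assumed to have been fitted elsewhere

print(flag_outliers(h, examples))  # → [(3, 9.0)]
```

Flagged examples could then be inspected, down-weighted, or dropped before refitting, depending on whether one trusts the hypothesis more than the data.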

Part of a video series

Access: Open access
Duration: 01:23:25 min
Recording date: 2018-06-13
Uploaded: 2018-06-14 09:01:42
Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and language understanding.
The course builds on the winter-semester lecture Künstliche Intelligenz I and continues from it.

Learning objectives and competences
Subject, learning, and methodological competence

  • Knowledge: Students become acquainted with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to real-world examples (exercise problems).

  • Analysis: By modelling them in the machine, students learn to better assess human intelligence capabilities.
