6 - Artificial Intelligence II [ID:9064]

Good.

Now I think we are at the right place.

We were learning probability theory with an aim towards Bayesian networks.

Bayesian networks are essentially a way, and a very well-working way, to model the world and the interdependencies of probabilities in that particular world.

Yes.

It is sliding through.

This might be the...

Let me try one more thing.

Okay, so it must be full screen.

Thank you.

Otherwise, you don't need me, right?

Let's see.

Please alert me when there's a problem.

We're going towards Bayesian networks as a way of modeling the world, which agents could

use and should use.

One of the ingredients, besides the ones we covered last week (normalization, marginalization, and the chain rule, essentially), is Bayes' rule, which, as we saw yesterday, basically gives me a way of...

No.

Wait.

How do I do this?

Yes.

Yes, I can't turn...

Okay.

Basically, it gives me a way to switch the direction of conditional probabilities if

I know the priors involved.

We can go from the diagnostic direction to the causal direction and the other way around.
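
For reference, here is the rule in the form it is usually written; this is the standard textbook statement, since the formula itself does not appear in the transcript:

P(\text{cause} \mid \text{effect}) = \frac{P(\text{effect} \mid \text{cause}) \cdot P(\text{cause})}{P(\text{effect})}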

That often has advantages because typically, as we saw, the causal direction is stable, since it really talks about how the world works, whereas an agent trying to find out things about the world very often wants, and wants to use, the diagnostic direction.

In all of those cases, Bayes' rule starts helping us.

We did a couple of extended examples.

We had this meningitis example, where you can use Bayes' rule to get the probability of somebody being ill with meningitis from the observation of a stiff neck.

We are using the fact that even in epidemic situations, the causal direction of the relation between meningitis and a stiff neck actually stays stable.
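
A minimal sketch of that computation in Python; the numbers below are illustrative textbook-style values assumed for this sketch, not taken from the lecture:

# Bayes' rule in the diagnostic direction: P(m | s) = P(s | m) * P(m) / P(s)
p_s_given_m = 0.7       # P(stiff neck | meningitis): causal direction, assumed value
p_m = 1 / 50000         # prior probability of meningitis, assumed value
p_s = 0.01              # prior probability of a stiff neck, assumed value

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)      # ~0.0014: a stiff neck alone is weak evidence for meningitis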

Of course, we talked about dogs.

Basically, the next thing was that we generalized the useful notion of independence to something much more prevalent in nature, or in the things we want to model: a weaker notion called conditional independence.

Here's the notion.

We have two sets of things; just think of one random variable Z1, another random variable Z2, and a third one Z.

We can basically still do independence-like reasoning; conditioning everything on Z does not destroy the independence property.

Z1 and Z2 are conditionally independent given Z if I have this multiplication property, with both factors conditioned on Z.
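
Written out in standard notation (reconstructed from the spoken description above): Z1 and Z2 are conditionally independent given Z iff

P(Z_1, Z_2 \mid Z) = P(Z_1 \mid Z) \cdot P(Z_2 \mid Z)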

Part of a video series:

Accessible via: Open access

Duration: 01:18:04 min

Recording date: 2018-04-26

Uploaded on: 2018-04-30 16:10:25

Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and language understanding.
It builds on the lecture Künstliche Intelligenz I from the winter semester and continues it.

Learning objectives and competencies
Subject, learning, and methodological competence

  • Knowledge: Students become familiar with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to examples from the real world (exercise assignments).

  • Analysis: By modeling intelligence in the machine, students learn to better assess human intelligence achievements.
