12 - Artificial Intelligence II [ID:9165]

Hello everybody. I hope all of you are sober again, and we can do some temporal probability models.

Remember that we started looking at modeling with time involved. Essentially everything we had done before assumed a static world. That's realistic for some things, like diagnosing your automobile: the state of the automobile isn't actually going to change very much during the time of deliberation. But very often that's not the case, and that's what we want to look at now.

So for that we need some kind of notion of time, and to keep things maximally general, in principle we're just going to say we have a time structure, which is essentially a partially ordered set. Most people think of time as being linear; in general you don't need to do that. There are notions of a branching future, which means there are many possible futures, or even a branching past. The only thing you really don't want in a time structure is loops and so on. So you don't really believe in Groundhog Day or something like this. We do want time to keep progressing with a partial order.

So what we're going to do is exactly the same as we did before, except that we're now going to index all the random variables we're talking about with time. A simple thing, except of course that we're not limiting the size of this set. In particular, we're going to use a linearly ordered time, namely the natural numbers with the usual less-or-equal ordering. Think of some kind of clock that goes ticking along, and at every tick we have a new situation. How long a clock tick is really depends on the application: some worlds we care about change daily, some annually, some in microseconds. So you have to put that into your modeling. But essentially we have linear discrete time.

And of course we want to make our life simple, and one of the things that makes life simple is if we bound the number of influences from past times.
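To pin down the notation (this follows the common textbook convention, e.g. Russell and Norvig; the symbols are not fixed by the lecture itself): the state of the world at tick $t$ is described by a set of random variables $\mathbf{X}_t$, so the model is a sequence

\[
\mathbf{X}_0, \mathbf{X}_1, \mathbf{X}_2, \ldots \qquad t \in \mathbb{N} \text{ with the usual } \le \text{ ordering.}
\]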

Okay, so we have a very natural way of thinking about these things, and we say that we usually want something like a first-order Markov property, which limits the influences from the past to one, usually (though not necessarily) one clock tick back. You can have higher-order Markov properties, where you limit the incoming influences to, for instance, two; these are usually better models, but our algorithms of course get more complex and have more complex run times. So we're mostly going to use only first-order Markov properties.
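Written out (with $\mathbf{X}_{0:t-1}$ abbreviating the whole history $\mathbf{X}_0, \ldots, \mathbf{X}_{t-1}$), the first-order Markov property says

\[
P(\mathbf{X}_t \mid \mathbf{X}_{0:t-1}) = P(\mathbf{X}_t \mid \mathbf{X}_{t-1}),
\]

while a second-order Markov property allows two incoming influences:

\[
P(\mathbf{X}_t \mid \mathbf{X}_{0:t-1}) = P(\mathbf{X}_t \mid \mathbf{X}_{t-2}, \mathbf{X}_{t-1}).
\]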

That was the one thing, and our example was this umbrella example, where we have a hidden set of variables, namely whether it rains today; very importantly, these are unobservable to the prison guard. What we can observe is whether the director brings an umbrella. And if we want to do it in a first-order Markov way, this is the very natural topology we're getting. It's not totally accurate, but it's not totally inaccurate either, and it's a nice example.
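For reference, a sketch of that topology (the standard chain structure for this example, writing $R_t$ for rain on day $t$ and $U_t$ for the umbrella observation):

\[
\cdots \longrightarrow R_{t-1} \longrightarrow R_t \longrightarrow R_{t+1} \longrightarrow \cdots, \qquad R_t \longrightarrow U_t \ \text{for each } t.
\]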

So if you think of this as a Bayesian network, then of course it's infinite. We can make it finite by chopping things off, saying, well, the guard only has a finite lifetime and so on. But even if we chop it, even if we make it finite by brute force, it's still going to be practically infinite: say the guard works there for 40 years, then we have something like 15,000 days. So we have to do something, and what we're doing is essentially something that's already apparent here: we can time-slice and get something extremely simple.

The idea is to make things simple by saying that all the time slices are essentially equal, and that the transition probabilities are always equal. And that's something I'd like you to think a little bit more about. There are two kinds of things we can put at the arrows. The first is the transition probabilities between raining yesterday and raining today. Even though in a normal Bayesian network those could all be different, we say the process is stationary if and only if these transitions are always the same. If you think about the agent, this is its model of how the world evolves: the transition model, something we've seen before. Whereas we can also look at what the time slices look like
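To make the payoff of stationarity concrete, here is a minimal sketch in Python. The probability values are assumptions for illustration (the ones commonly used with this example in the Russell and Norvig textbook: P(R_t | R_{t-1}) = 0.7, P(U_t | R_t) = 0.9, P(U_t | not R_t) = 0.2), not values fixed by this lecture; the point is that one transition table and one sensor table serve every time slice, however long the guard works there.

```python
# Stationary umbrella model: one transition table and one sensor table
# are reused for every time slice, no matter how many days we run.

# P(R_t = True | R_{t-1} = key): the same table for all t (stationarity).
TRANSITION = {True: 0.7, False: 0.3}
# P(U_t = True | R_t = key): the sensor model, also time-independent.
SENSOR = {True: 0.9, False: 0.2}

def predict(p_rain: float) -> float:
    """One-step prediction: P(R_t) from P(R_{t-1}) via the transition model."""
    return p_rain * TRANSITION[True] + (1.0 - p_rain) * TRANSITION[False]

def filter_step(p_rain: float, umbrella_seen: bool) -> float:
    """One update step: predict, then condition on today's observation."""
    prior = predict(p_rain)
    # Weight by the sensor model and renormalize (Bayes' rule).
    like_rain = SENSOR[True] if umbrella_seen else 1.0 - SENSOR[True]
    like_dry = SENSOR[False] if umbrella_seen else 1.0 - SENSOR[False]
    numerator = like_rain * prior
    return numerator / (numerator + like_dry * (1.0 - prior))

# 40 years of observations reuse the same two tables on every tick.
belief = 0.5  # uninformative prior on day 0
for umbrella_seen in [True, True, False]:
    belief = filter_step(belief, umbrella_seen)
    print(f"P(rain) = {belief:.3f}")
```

With the uninformative prior of 0.5 and two umbrella sightings, this yields P(rain) of about 0.818 and then 0.883; whatever the horizon, the same two small tables are reused on every tick, which is exactly what stationarity buys us.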

Part of a video series
Access: Open Access
Duration: 01:26:11
Recording date: 2018-05-23
Uploaded: 2018-05-23 21:40:23
Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and language understanding.
The course builds on the winter-semester lecture Künstliche Intelligenz I and continues it.

Learning objectives and competencies
Subject, learning, and methodological competence

  • Knowledge: Students become acquainted with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to real-world examples (exercises).

  • Analysis: By modeling them in the machine, students learn to better assess human intelligence capabilities.
