So, as orientation: we're in the midst of looking at Bayesian networks, Bayesian networks as a world model that allows probabilistic inference. The idea is that an agent has to model the world somehow, and in an uncertain world we have to do something about the fact that we can't see everything, that our actions are not sure to succeed, that we have unreliable sensors, and all those kinds of things. In all of those cases, we have to deal with possible worlds and our estimates of how likely those worlds are, and still make good decisions.
I think we're going to look into decision theory next. So far, we've only done modeling: setting up models. In particular, yesterday we talked about constructing Bayesian networks rationally, and then computing probabilities based on that. Basically: given some evidence, how probable is it that it's going to rain tomorrow, or something like this? Well, actually, we haven't done time yet; that'll come as well. What is the probability that it is raining outside right now? We had some evidence walking here: it didn't rain, and there weren't any clouds, those kinds of things. Bayesian networks is our tool of choice because
it gives us a good way of representing and computing with conditional independence. Conditional independence is our prime lifesaver in terms of computational complexity, and of course computational complexity has its cost for an agent. We have this basic Bayesian network construction algorithm
that just basically said: take your variables, order them in some way, and then compute the conditional probability tables. From the conditional probability tables, you can see what the dependencies are by the simple test of dropping a variable from the givens and seeing whether anything changes, and then we have a model. If there is a dependency, we add it to the graph, add the CPT to the graph, and we end up with something. Depending on the order, as we saw in the example, we get less pretty graphs, where less pretty means more arrows than necessary, and whenever you think it's bad, it can get even worse with a wrong order. The idea is, yes?
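The dependency test just described, dropping a variable from the givens and seeing whether anything changes, can be sketched in code. The three-variable burglary example and its numbers below are illustrative assumptions, not data from the lecture; the sketch recovers a sparse structure under a causal ordering and a denser one under a diagnostic ordering:

```python
from itertools import combinations, product

VARS = ["Burglary", "Earthquake", "Alarm"]

# Assumed toy distribution: Burglary and Earthquake are independent
# causes; Alarm depends on both.
P_B, P_E = 0.01, 0.02
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

# Build the full joint distribution P(B, E, A) as a lookup table.
joint = {}
for b, e, a in product([True, False], repeat=3):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    joint[(b, e, a)] = p

IDX = {v: i for i, v in enumerate(VARS)}

def prob(event):
    """Marginal probability of a partial assignment {variable: value}."""
    return sum(p for world, p in joint.items()
               if all(world[IDX[v]] == val for v, val in event.items()))

def cond(var, given):
    """P(var=True | given)."""
    return prob({**given, var: True}) / prob(given)

def minimal_parents(order):
    """For each variable, the smallest set of predecessors such that
    dropping the remaining predecessors from the givens changes nothing."""
    parents = {}
    for i, v in enumerate(order):
        preds = order[:i]
        for k in range(len(preds) + 1):
            hit = next(
                (cand for cand in combinations(preds, k)
                 if all(abs(cond(v, dict(zip(preds, vals))) -
                            cond(v, {q: val for q, val in zip(preds, vals)
                                     if q in cand})) < 1e-9
                        for vals in product([True, False], repeat=len(preds)))),
                None)
            if hit is not None:
                parents[v] = hit
                break
    return parents

# A causal order recovers the sparse structure (two arrows):
print(minimal_parents(["Burglary", "Earthquake", "Alarm"]))
# {'Burglary': (), 'Earthquake': (), 'Alarm': ('Burglary', 'Earthquake')}

# A diagnostic order needs more arrows (a "less pretty" graph):
print(minimal_parents(["Alarm", "Burglary", "Earthquake"]))
# {'Alarm': (), 'Burglary': ('Alarm',), 'Earthquake': ('Alarm', 'Burglary')}
```

Note that this brute-force search over parent sets is exponential; it is only meant to make the "drop a given and see whether something changes" test concrete.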
Yes. No. I'll try. Yes.
No. Down there, Alarm comes before Burglary and Earthquake, which is fine because that makes the second half of the list causally ordered, and to mess things up, I've made it diagnostic back there as well.
No, it depends on the others as well. You can see there is a transitive link here. There is no transitivity; it really depends on the givens. What you really want to see is the conditional independence of the non-descendants given the parents.
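This local Markov property, each variable being conditionally independent of its non-descendants given its parents, can be checked numerically on a toy chain. The variable names and numbers below are assumptions for illustration only:

```python
from itertools import product

# Assumed chain: Burglary -> Alarm -> JohnCalls.
P_B = 0.01
P_A = {True: 0.94, False: 0.001}   # P(Alarm=T | Burglary)
P_J = {True: 0.90, False: 0.05}    # P(JohnCalls=T | Alarm)

# Full joint distribution P(B, A, J) as a lookup table.
joint = {}
for b, a, j in product([True, False], repeat=3):
    p = (P_B if b else 1 - P_B)
    p *= P_A[b] if a else 1 - P_A[b]
    p *= P_J[a] if j else 1 - P_J[a]
    joint[(b, a, j)] = p

def p(pred):
    """Probability of the event described by a predicate over (b, a, j)."""
    return sum(pr for w, pr in joint.items() if pred(*w))

# Local Markov property: JohnCalls is conditionally independent of its
# non-descendant Burglary given its parent Alarm.
p_j_given_ab = p(lambda b, a, j: j and a and b) / p(lambda b, a, j: a and b)
p_j_given_a  = p(lambda b, a, j: j and a) / p(lambda b, a, j: a)
print(abs(p_j_given_ab - p_j_given_a) < 1e-9)   # True: P(J|A,B) = P(J|A)
```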
Yes. Yes.
Exactly. Come on. Exactly.
This here really is the underlying fault in these lists. What I want to show here is that we always have these causal and diagnostic directions, and if you order things diagnostically all the way through, bad things are going to happen.
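Why does a fully diagnostic ordering cause trouble? Two independent causes become dependent once their common effect is given ("explaining away"), so putting effects before causes forces extra arrows into the graph. A small numerical check, with assumed toy numbers:

```python
from itertools import product

# Assumed collider structure: Burglary -> Alarm <- Earthquake.
P_B, P_E = 0.01, 0.02
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

# Full joint distribution P(B, E, A) as a lookup table.
joint = {}
for b, e, a in product([True, False], repeat=3):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    joint[(b, e, a)] = p

def p(pred):
    """Probability of the event described by a predicate over (b, e, a)."""
    return sum(pr for w, pr in joint.items() if pred(*w))

# Marginally, the two causes are independent: P(B | E) = P(B).
p_b         = p(lambda b, e, a: b)
p_b_given_e = p(lambda b, e, a: b and e) / p(lambda b, e, a: e)
print(abs(p_b_given_e - p_b) < 1e-9)   # True

# Given the effect, they become dependent: learning Earthquake=T
# "explains away" the alarm and lowers the belief in Burglary.
p_b_given_a  = p(lambda b, e, a: b and a) / p(lambda b, e, a: a)
p_b_given_ae = p(lambda b, e, a: b and a and e) / p(lambda b, e, a: a and e)
print(round(p_b_given_a, 3), round(p_b_given_ae, 3))   # 0.583 0.032
```

Because of this induced dependence, a construction order with effects first cannot leave the causes unconnected, which is exactly why the all-diagnostic list ends up with extra arrows.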
Presenters
Accessible via
Open access
Duration
00:05:52 min
Recording date
2021-03-30
Uploaded on
2021-03-31 10:36:40
Language
en-US
Recap: Constructing Bayesian Networks (Part 1)
Main video on the topic in chapter 4 clip 4.