Actually, we have one algorithm called the tree search algorithm and essentially you've
seen that before, at least I'm assuming.
So the idea is the following, and I'm not going to go into it abstractly, I'm going to show
you examples because that makes it much clearer.
What you do is you start at your initial state, and then you look at all the operators that
are applicable, each of which brings you into a successor state.
You look at all the successor states, of which there are one, two, three here in Romania.
You compute the successor states, and you choose a successor state from the states you've
actually looked at and computed, but not actually acted on.
All of those states that you've identified but not acted on, we're going to call the fringe.
And from the fringe you choose one, in this case Sibiu, and then you do the same thing
again.
In Sibiu you compute the successors, here they are, which makes your fringe smaller, because
Sibiu you have already acted on, but also bigger, because these are new.
And then from the new fringe you select a different one and you start over.
Oh good, no, not good, we want to go to Bucharest, I'm sorry, I messed up.
I would have gone in a circle here, which is perfectly fine for a plan, it's not an
optimal plan though.
That's the tree search example.
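To make the fringe bookkeeping in that example concrete, here is a minimal sketch in Python; the neighbor map is only a hand-picked fragment of the Romania map, assumed here for illustration, not the full problem.

```python
# A hand-picked fragment of the Romania road map (assumed for illustration).
neighbors = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
}

# Start: the fringe holds only the initial state.
fringe = ["Arad"]

# Expand Arad: remove it from the fringe, add its successors.
state = fringe.pop(0)            # "Arad"
fringe.extend(neighbors[state])  # fringe: Sibiu, Timisoara, Zerind

# Expand Sibiu: the fringe shrinks by the node we acted on,
# but grows by its successors -- including Arad again,
# because the tree does not remember where it came from.
state = fringe.pop(0)            # "Sibiu"
fringe.extend(neighbors[state])
print(fringe)  # ['Timisoara', 'Zerind', 'Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea']
```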
And if you look at the algorithm, there it is: you are given a problem and a strategy.
From the problem you need to be able to compute the successor states, and the strategy is
what allows you, given a fringe, to select one node.
And this will be the main parameter: in which way you choose the next node to act on from
the fringe.
So given a strategy, what do you do?
You loop: if there are no candidates for expansion, which means I've done everything, then
I fail, there is no solution; otherwise you choose a leaf node for expansion, according
to the strategy.
If the node is a goal state, then you just return the solution; if it's not, you
just repeat.
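Written out as code, that loop might look roughly like the following sketch; the names `successors`, `is_goal`, and `strategy` are placeholders for whatever the concrete problem and strategy provide, not a fixed interface.

```python
from typing import Callable, Iterable, Optional

class Node:
    """A search-tree node: a state plus the path that reached it."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action

def tree_search(initial_state,
                successors: Callable[[object], Iterable[tuple]],
                is_goal: Callable[[object], bool],
                strategy: Callable[[list], Node]) -> Optional[Node]:
    """Generic tree search: the strategy decides which fringe node to expand next."""
    fringe = [Node(initial_state)]
    while True:
        if not fringe:                  # no candidates for expansion left
            return None                 # fail: there is no solution
        node = strategy(fringe)         # choose a leaf node according to the strategy
        fringe.remove(node)
        if is_goal(node.state):         # goal test on the chosen node
            return node                 # parent pointers encode the whole path
        for action, result in successors(node.state):   # expand the node
            fringe.append(Node(result, parent=node, action=action))
```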
The thing I want you to understand is that we're not actually searching the space of
all states directly.
Remember, the space of all vacuum cleaner states was this funny graph with eight states, and
the Romania problem you can think of as this map I showed you, which had, what was it,
25 states maybe, and maybe 50 actions or something like this.
What we're looking at here is actually a tree, which is different from the state space.
For instance, this tree has Arad in it one, two, three, four times.
In the state space, or state graph, you would have every state only once.
So it's important to realize that we're searching a space that is simpler than the graph, but
that might be bigger.
We're generating a data structure in memory that is labeled by states: each node is labeled
by a state, but the nodes are not identical to the states.
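Reusing the Node class from the sketch above, the distinction looks like this: two tree nodes can carry the same state label while still being different nodes (the action names are just illustrative).

```python
root = Node("Arad")                                         # the root of the search tree
via_sibiu = Node("Sibiu", parent=root, action="go Sibiu")
back_to_arad = Node("Arad", parent=via_sibiu, action="go Arad")

# Same state label, different tree nodes.
print(back_to_arad.state == root.state)   # True  -- both are labeled "Arad"
print(back_to_arad is root)               # False -- they are distinct nodes
```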
That allows us to define a notion of path cost, because the property of a tree that we're
using is that for every inner node or leaf of that tree, there's exactly one path from the
root to that node.
If we have a path of actions, we can sum up the costs along it, and that gives us this
notion of a path cost.
In a graph, even in a simple graph like that, this node does not have a natural path cost.
You could reach it this way or you could reach it that way.
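Because every tree node has exactly one path back to the root, you can read off a path cost by walking the parent pointers and summing step costs; the distances below are assumed here purely for illustration, and the Node class is the one from the sketch above.

```python
# Assumed step costs (kilometres) for the roads used in this example.
step_cost = {("Arad", "Sibiu"): 140, ("Sibiu", "Rimnicu Vilcea"): 80}

def path_cost(node: Node) -> int:
    """Sum the step costs along the unique path from the root to this node."""
    total = 0
    while node.parent is not None:
        total += step_cost[(node.parent.state, node.state)]
        node = node.parent
    return total

root = Node("Arad")
sibiu = Node("Sibiu", parent=root)
rimnicu = Node("Rimnicu Vilcea", parent=sibiu)
print(path_cost(rimnicu))   # 140 + 80 = 220
```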