Just to remind ourselves, we are basically talking right now about agent programs for
goal-based agents.
So we use this example of traveling in Romania as one of the prime examples for this.
We have an agent.
That agent is in some kind of a state and we've made the states to be the cities in
Romania and the actions to be driving or going from one city to the other.
And the agent has an initial state, which we assume to be Arad, just because it's one of
the names that I can actually pronounce.
And we have the goal to go to Bucharest.
We framed this as a tree search problem and we've looked at various search strategies.
Remember, we have this one generic tree search algorithm that we're always looking at.
It basically loops: we keep a fringe of unexpanded nodes, we pick one from the fringe
according to the strategy, we expand that node, and we repeat until we hit a goal node.
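To make this concrete, here is a minimal sketch of that generic loop in Python; the map fragment, the helper names, and the node representation are my own illustration, not code from the lecture:

```python
def tree_search(initial_state, successors, is_goal, pick):
    """Generic tree search: keep a fringe of unexpanded nodes,
    pick one according to the strategy, expand it, repeat."""
    fringe = [(initial_state, [initial_state])]  # nodes are (state, path) pairs
    while fringe:
        state, path = pick(fringe)      # the strategy removes and returns a node
        if is_goal(state):
            return path                 # stop as soon as we hit a goal node
        for nxt in successors(state):   # expand: put the children on the fringe
            fringe.append((nxt, path + [nxt]))
    return None                         # fringe exhausted, no solution found

# A tiny fragment of the Romania map, just for illustration:
romania = {
    'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
    'Sibiu': ['Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Bucharest'],
    'Rimnicu Vilcea': ['Pitesti'],
    'Pitesti': ['Bucharest'],
}

# The search behavior falls out of the strategy alone: picking the
# oldest node (FIFO) gives breadth-first behavior.
path = tree_search('Arad', lambda s: romania.get(s, []),
                   lambda s: s == 'Bucharest', lambda f: f.pop(0))
print(path)  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that the algorithm itself never mentions breadth-first or uniform cost; swapping out `pick` is the only thing that changes between the strategies discussed below.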
And depending on the strategy, that is, on how we pick the next node from the fringe, we
get different search behaviors. The way we want to evaluate them is to look at
completeness: will we find a solution whenever one exists? Then the time complexity, the
space complexity, and optimality: whether we find the best solution first. Because if we
find the best solution first, then once we have a solution we can stop.
Very useful that.
Otherwise, we would have to basically find all the solutions of which there might be
infinitely many and then compare them.
And the inputs to these evaluations are the branching factor, that is, the maximum number
of outgoing arcs a state can have; the minimal depth of a solution in the search tree;
and the maximum depth of the tree. Any of those might be infinite.
Okay?
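To get a feeling for these parameters, here is a quick back-of-the-envelope computation (my own illustration): with branching factor b, a search tree of depth d contains on the order of b^d nodes, which is what drives the complexity results for these strategies.

```python
def nodes_generated(b, d):
    """Nodes in a complete tree with branching factor b,
    counting levels 1..d below the root: b + b^2 + ... + b^d."""
    return sum(b**k for k in range(1, d + 1))

# Even a modest branching factor blows up very quickly:
for depth in range(1, 7):
    print(depth, nodes_generated(10, depth))
# 1 10
# 2 110
# 3 1110
# 4 11110
# 5 111110
# 6 1111110
```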
So, we've looked at the first two of, what is it, five of these strategies.
The first one is breadth-first search, where essentially the idea is that we go through
the tree level by level.
And the second one, which is uniform cost search, instead of going level by level by tree
depth, uses these iso-cost lines: the lines of constant path cost.
And they're basically the same thing, only the second one is optimal.
The first one is only optimal if the step cost is uniform.
And both of them are essentially complete under certain reasonable constraints, and
exponential in both time and space. Exponential in both time and space means completely,
utterly intractable.
Okay?
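Seen as code, the two strategies differ only in the fringe data structure: a FIFO queue for breadth-first search versus a priority queue ordered by path cost for uniform cost search. A sketch of both, assuming a weighted fragment of the Romania map with the usual textbook distances (the function names and graph encoding are my own):

```python
import heapq
from collections import deque

# Weighted fragment of the Romania map (textbook distances, for illustration):
graph = {
    'Arad': [('Sibiu', 140)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
    'Bucharest': [],
}

def breadth_first(start, goal):
    """FIFO fringe: expand the tree level by level."""
    fringe = deque([(start, [start])])
    while fringe:
        state, path = fringe.popleft()
        if state == goal:
            return path
        for nxt, _ in graph[state]:
            fringe.append((nxt, path + [nxt]))

def uniform_cost(start, goal):
    """Priority-queue fringe ordered by path cost g(n):
    expand along iso-cost lines instead of tree levels."""
    fringe = [(0, start, [start])]
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        for nxt, step in graph[state]:
            heapq.heappush(fringe, (g + step, nxt, path + [nxt]))

print(breadth_first('Arad', 'Bucharest'))  # fewest steps, via Fagaras
print(uniform_cost('Arad', 'Bucharest'))   # cheapest cost: 418, via Pitesti
```

On this fragment, breadth-first finds the route with the fewest steps (via Fagaras, total cost 450), while uniform cost search finds the cheaper 418-cost route via Pitesti; with uniform step costs the two strategies would coincide, which is exactly the optimality point above.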
And the metaphor I've used for this is that this is not the realistic picture; these are
the realistic pictures of trees. That's how trees grow in nature.
Okay?
Presenters
Accessible via
Open access
Duration
01:24:28 min
Recording date
2024-11-12
Uploaded on
2024-11-14 20:39:11
Language
en-US