Recap: Heuristics and their Properties
(Main video on this topic: chapter 7, clip 7.)

And the question is, how do we get out of this? And the answer was, essentially, A* search, where we basically add the heuristic as an extra term to the path cost. And under certain circumstances, which we've explored, that actually works.
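In symbols, A* expands nodes in order of the evaluation function

\[ f(n) = g(n) + h(n), \]

where $g(n)$ is the cost of the path found so far from the start to node $n$, and $h(n)$ is the heuristic estimate of the remaining cost from $n$ to a goal.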
And the circumstances are easy to state: we just need an admissible heuristic, that is, one that never exceeds the goal distance function.
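Written out, admissibility means that for every node $n$

\[ h(n) \le h^*(n), \]

where $h^*(n)$ denotes the goal distance function, i.e. the cost of a cheapest path from $n$ to a goal node.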
More generally, we have looked at dominance of heuristics: the bigger heuristic is the better one, unless it overestimates. When it overestimates, we lose theoretical properties such as the optimality of A*, but the closer we get to the goal distance function, the better our heuristic is. So we talked about admissibility of heuristics, which is a difficult thing to prove directly, and we looked at consistency as a kind of sufficient condition. Most of the heuristics you are ever going to deal with are in fact consistent.
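For reference, the two notions can be stated as follows: a heuristic $h_2$ dominates $h_1$ if $h_2(n) \ge h_1(n)$ for all nodes $n$, and a heuristic $h$ is consistent if

\[ h(n) \le c(n, n') + h(n') \text{ for every edge } n \to n' \text{ with cost } c(n, n'), \quad \text{and } h(n) = 0 \text{ for goal nodes } n. \]

Summing the edge inequality along any path from $n$ to a goal gives $h(n) \le h^*(n)$, which is exactly admissibility.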
Since we proved that admissibility is a consequence of consistency (actually, you may have proved it yourself by looking through the material here; we did not go through it in class), you can check your heuristic for consistency, and that allows us to establish admissibility without knowing the goal distance function. In some cases, such as straight-line distance, it is very easy to convince yourselves of admissibility, because it is a consequence of the triangle inequality in three-dimensional space, with which we are very familiar. But where we have a multi-dimensional, oddly shaped search space, admissibility is not so easy to determine directly, and then consistency helps.
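To make checking a heuristic for consistency concrete, here is a minimal sketch in Python; the graph, node coordinates, and edge costs are made up for illustration and are not taken from the lecture. It verifies the consistency inequality on every edge, using straight-line distance to the goal as the heuristic:

```python
import math

# Hypothetical weighted graph: node -> list of (neighbor, edge cost).
# Node coordinates let us use straight-line (Euclidean) distance as h.
coords = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "G": (6, 4)}
graph = {
    "A": [("B", 3.0), ("C", 5.0)],
    "B": [("A", 3.0), ("C", 4.0)],
    "C": [("B", 4.0), ("G", 3.0)],
    "G": [],
}
goal = "G"

def h(node):
    """Straight-line distance from node to the goal."""
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x1 - x2, y1 - y2)

def is_consistent(graph, h, goal, eps=1e-9):
    """Check h(n) <= c(n, n') + h(n') on every edge, and h(goal) == 0."""
    if abs(h(goal)) > eps:
        return False
    for n, edges in graph.items():
        for n2, cost in edges:
            if h(n) > cost + h(n2) + eps:
                return False
    return True

print(is_consistent(graph, h, goal))  # True for this example
```

The check passes here because every edge cost is at least the straight-line distance between its endpoints, so the triangle inequality does the rest; that is the same geometric argument that makes straight-line distance admissible in the first place.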