Right. And if you think about solving Sudoku: most people I've seen solve Sudoku with a pen, where you annotate a cell and say, this here is either a 2 or a 3, because all the other values are already taken. And then you write that down: this is a 2 or a 3. And then you do something interesting, because from this additional constraint, which was not in the original description (the original said this cell could be a 1 or a 2 or a 3, all the way up to 9, but you're saying it is a 2 or a 3), you derive more: well, if this is a 2 or a 3, then that cell must be a 4, because that's the only value left over. And so on. You go on: I know all of these don't work, but I don't care, I concentrate on other things. What you're doing that way is tightening up the description. You're not really looking at partial solutions; you're massaging the description so that it becomes easier to solve. And ideally you massage the description so well that you can read off the solution. That's what we want to do now: tighten up the description so that the problem becomes very easy to solve. Ideally you tighten, tighten, tighten, until there is no choice whatsoever left. That's typically not going to work, because tightening up needs information. Imagine you have no numbers in the grid: there is nothing you can tighten up. So what you typically do is a kind of hybrid search, where you make random or informed choices. For instance, with a completely empty Sudoku you just put in a number, and then you can tighten up again. The polite word for that is propagating information: you take the information you have and propagate it into other cells, tightening up the description, and you do that until you can't anymore.
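To make that concrete, here is a minimal sketch of one such propagation rule for Sudoku (my own illustration, not code from the lecture; representing the grid as a dict of candidate sets is an assumption): whenever a cell's domain shrinks to a single value, that value is discarded from the domains of all its peers, and this repeats until nothing changes.

```python
# Minimal sketch of one propagation rule for Sudoku (assumed representation:
# `domains` maps each cell (row, col) to the set of digits still possible there).

def peers(cell):
    """All cells sharing a row, column, or 3x3 box with `cell`."""
    r, c = cell
    row = {(r, j) for j in range(9)}
    col = {(i, c) for i in range(9)}
    box = {(3 * (r // 3) + i, 3 * (c // 3) + j)
           for i in range(3) for j in range(3)}
    return (row | col | box) - {cell}

def propagate(domains):
    """Tighten the description until a fixpoint: whenever a cell has exactly
    one candidate left, discard that value from all of its peers' domains.
    Returns False if some domain becomes empty (a contradiction)."""
    changed = True
    while changed:
        changed = False
        for cell, dom in domains.items():
            if len(dom) == 1:
                value = next(iter(dom))
                for p in peers(cell):
                    if value in domains[p]:
                        domains[p].discard(value)
                        if not domains[p]:
                            return False  # dead end: no value fits this cell
                        changed = True
    return True
```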
There are various ways of propagating this information. At some point you're stuck with propagation, and then you make another random choice. And you only need to backtrack over the choices, because the tightening up doesn't actually change the set of solutions; it only changes the description. If you think about it, that's really what you do when you're solving Sudoku: you're not only doing backtracking search, you're also propagating information. And that works well.
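Putting the two together, here is a hedged sketch of that hybrid scheme, reusing `propagate` from the sketch above (again my illustration, not the lecture's code): propagate until stuck, then commit to a value for some undecided cell, and backtrack only over those choices.

```python
import copy

def solve(domains):
    """Hybrid search: propagate until stuck, then branch on a choice.
    Only the choices are backtracked; propagation preserves the solution set."""
    if not propagate(domains):
        return None  # propagation derived a contradiction
    open_cells = [c for c, d in domains.items() if len(d) > 1]
    if not open_cells:
        return domains  # every domain is a singleton: read off the solution
    # An informed choice: branch on the cell with the fewest candidates.
    cell = min(open_cells, key=lambda c: len(domains[c]))
    for value in sorted(domains[cell]):
        trial = copy.deepcopy(domains)
        trial[cell] = {value}   # make a choice ...
        result = solve(trial)   # ... then propagate and recurse
        if result is not None:
            return result
    return None  # all choices failed: backtrack
```

Note that the discard inside `propagate` only removes values that cannot occur in any solution, which is exactly why only the choices, and never the propagation, have to be undone.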
The important thing here I want you to remember, and that's the take-home message for today, is that we have these two levels of search: search at the description level, and search at the state-space level. And that is something we're going to see again. The more freedom we have in describing the states, the more attractive description-level search becomes. And the less we can look inside the states, the more state-space search works for you.
Next week we're going to start on logic. There we're only doing inference, which is description-level search: we only have descriptions. OK, that's the main idea for today.
Let's do the math.
So if we have two constraint networks that share the same variables, then the Vs are equal; that's what we have. But we might have different domains, and we might have different constraints. Then we're going to call them equivalent if they have the same set of solutions.
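Written out, one standard way to formalize this (my reconstruction of the definitions; the exact notation on the slides may differ):

```latex
Let $\gamma = (V, D, C)$ and $\gamma' = (V, D', C')$ be constraint networks
over the same set of variables $V$.

% Equivalence: same solutions, possibly different descriptions.
$\gamma \equiv \gamma'$ \quad iff \quad $\gamma$ and $\gamma'$ have the same
set of solutions.

% Tightness: gamma' says at least as much as gamma about every variable
% and every constrained pair of variables.
$\gamma' \sqsubseteq \gamma$ \quad iff \quad $D'_v \subseteq D_v$ for all
$v \in V$, and $C'_{uv} \subseteq C_{uv}$ for every pair $u, v$ that
$\gamma$ constrains.
```

In this vocabulary, tightening means moving from $\gamma$ to an equivalent but tighter network $\gamma' \sqsubseteq \gamma$: the set of solutions stays the same while the description shrinks.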