We're talking about constraint satisfaction problems.
And the basic innovation of that topic is that we're moving from black-box states to factored states.
It gives us multiple advantages.
One of them is that we can look into the states and get by with much smaller representations and, in particular, far fewer actions.
Looking into the states is particularly good if you want to have generally applicable heuristics.
And indeed we've managed to find three heuristics that together give us a huge speed-up of the search: we can go from something like 25 queens to a thousand queens.
What you have to realize, of course, is that the search space also grows exponentially in the number of queens. So it's not just a factor of 40 or something like that, but more like 2 to the 40 or 8 to the 40.
And that's basically just by going to factored representations.
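To make that concrete, here is a minimal sketch (illustrative Python, not taken from the lecture) of n-queens in a factored representation: one variable per column, whose value is the row of that column's queen. The three heuristics are not spelled out at this point, so the sketch uses one common choice, minimum-remaining-values ordering with forward checking, purely as an illustration of how the factored view is exploited.

```python
def conflict(col1, row1, col2, row2):
    """Two queens attack each other if they share a row or a diagonal."""
    return row1 == row2 or abs(row1 - row2) == abs(col1 - col2)

def solve(n):
    # Factored representation: variable = column, value = row of its queen.
    domains = {c: set(range(n)) for c in range(n)}
    assignment = {}

    def backtrack():
        if len(assignment) == n:
            return dict(assignment)
        # Minimum remaining values: pick the unassigned column with the fewest rows left.
        col = min((c for c in range(n) if c not in assignment),
                  key=lambda c: len(domains[c]))
        for row in sorted(domains[col]):
            assignment[col] = row
            # Forward checking: prune rows that now conflict with the new queen.
            pruned = []
            for other in range(n):
                if other in assignment:
                    continue
                for r in list(domains[other]):
                    if conflict(col, row, other, r):
                        domains[other].remove(r)
                        pruned.append((other, r))
            if all(domains[c] for c in range(n) if c not in assignment):
                result = backtrack()
                if result is not None:
                    return result
            # Undo the pruning and the assignment before trying the next row.
            for other, r in pruned:
                domains[other].add(r)
            del assignment[col]
        return None

    return backtrack()

print(solve(8))  # one valid placement, printed as a {column: row} mapping
```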
Now, it's very important to keep in mind that this doesn't always work well.
The slogan here is we're taking advantage of structure in the domain.
Some domains are very, very unstructured.
And in these cases, in the worst case, factored representations buy us nothing.
So if there's structure, we as the agent designers want to find that structure (natural intelligence at work) and then have a way of encoding it.
So in a way, the thing we've learned over the last week or so is that if we detect structure, we now have a standard way of exploiting it.
And you remember, factored representations were kind of in the middle between atomic representations, which have no structure at all, and structured representations, where we have a lot of flexibility.
The thing to realize about factored representations is that at design time we have to commit to the set of variables, or to the set of attributes that correspond to them.
And we might not know what's going to come up, and that's when we need even better representations.
So that was kind of the first thing: we know how to do heuristic, informed, search-based problem solving on factored representations.
The next thing, which is going to add another couple of orders of magnitude of efficiency,
is what we call constraint propagation.
That rests on another basic innovation: instead of doing domain-level search, we do description-level search.
Rather than just having a way of actually executing search, offline search that is,
we give ourselves a language for describing these problems.
In our case that's very easy, right?
The set of variables, the set of domains, the set of constraints.
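As a rough sketch of such a description, assuming nothing beyond those three components (the class and method names below are illustrative, not from the lecture):

```python
# A constraint problem description: variables, domains, constraints.
from typing import Callable, Dict, List, Tuple

class CSP:
    def __init__(self,
                 variables: List[str],
                 domains: Dict[str, List[str]],
                 constraints: List[Tuple[Tuple[str, str], Callable[[str, str], bool]]]):
        self.variables = variables      # which variables exist
        self.domains = domains          # the possible values of each variable
        self.constraints = constraints  # binary constraints as ((x, y), relation)

    def consistent(self, assignment: Dict[str, str]) -> bool:
        """No constraint whose variables are both assigned is violated."""
        return all(rel(assignment[x], assignment[y])
                   for (x, y), rel in self.constraints
                   if x in assignment and y in assignment)
```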
And then we want to add constraints to existing problems, obtaining different descriptions.
The crucial notion is that of equivalence: the new description must describe, morally, the same constraint problem.
We are not changing the underlying problem, which we can basically characterize by its set of solutions.
And so we make things easier by adding constraints.
That's the basic idea.
We've seen it in this example where we can add a constraint that Western Australia and Queensland have to have the same color,
because we've used up the other two colors in the middle.
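Since equivalence just means "same set of solutions", this can be checked directly on the Australia example by brute force; the following sketch (illustrative Python, using the usual region abbreviations) confirms that adding WA = Q leaves the solution set unchanged:

```python
from itertools import product

regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
colours = ["red", "green", "blue"]
borders = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
           ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def solutions(extra=lambda a: True):
    """All total colourings that respect the border constraints and `extra`."""
    return {tuple(sorted(a.items()))
            for values in product(colours, repeat=len(regions))
            for a in [dict(zip(regions, values))]
            if all(a[x] != a[y] for x, y in borders) and extra(a)}

original = solutions()
tightened = solutions(extra=lambda a: a["WA"] == a["Q"])  # the added constraint
print(original == tightened)  # True: same solution set, so the descriptions are equivalent
```

The tightened description has exactly the same solutions but rules out more partial assignments early, which is exactly why adding the constraint makes things easier.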
And there are various ways of doing all these kinds of things.
And we've seen that we as humans, well trained in detecting structure and making use of it,
do constraint propagation all the time.
Half of the stuff that I write down by hand is actually little constraints.
Everything else that's more discursive I type directly into the computer,
but things like back-of-the-envelope calculations, that's constraint propagation all the time.
And then when you have the solution, you can throw away the envelope.
That's really what's at play here.