8 - Constraint Propagation with Local Search [ID:22355]
So I have five minutes left. That should be enough for this section, because it's basically just a small wrap-up. So far we have treated this whole thing with tree search as the basic algorithm, namely with depth-first search. But you can of course also do local search, because, keep in mind, if you consider a constraint satisfaction problem as a search tree, the path to the solution ultimately doesn't matter; what matters is which leaf you pick as a solution. So in theory you can use local search beautifully here.

So what does that mean? Well, all of these algorithms like hill climbing and simulated annealing, which you've seen earlier in the semester, can be used for solving constraint satisfaction problems: you start with a random assignment of all of your variables and then just keep modifying it until it happens to satisfy the constraints. That's one way to do things, and you can use several heuristics to make the whole thing more efficient. So the idea is: you start with a complete assignment and then use local search to reassign the variables until they fit.

Just as an example of what this looks like, imagine the four-queens problem: you have these four queens and you assign them random positions on the board. This will usually not give you a solution to the problem. So now you reassign your queens to different squares, either randomly or according to one of those local search algorithms. Maybe the first thing you notice is that this queen breaks three of the constraints, so you just move it somewhere where it doesn't, like up here. Then you only have these three queens left, which still break two constraints.

So you might pick this one, which breaks two of them, and move it somewhere where it doesn't, and hurray: by just moving this second queen down here, you find a solution to the problem. That's one way to do this. The idea here is the min-conflicts heuristic: for the variable you reassign, you try to choose a value that conflicts minimally with the remaining variables, i.e. that breaks the fewest constraints. So in this case, pick the queen that is involved in the most broken constraints and move it to a square where it breaks as few constraints as possible; in both of these cases a single move suffices. I move this one up here and no constraints are broken; I move that one down here and no constraints are broken.

That's one way to do this. And it is somewhat interesting in the following respect when it comes to complexity. This approach appears to work rather well: it can solve the n-queens problem in an almost constant number of repair steps, which is basically insanely good, for arbitrary n. Except, and this is where things get interesting, there is a small range of values where local search with the min-conflicts heuristic suddenly seems to explode with respect to complexity, and that point happens to lie in a narrow range of the ratio between the number of constraints and the number of variables. So if I have many constraints and few variables, that's fine. If I have many variables and few constraints, that's fine too. But where this ratio happens to hit a specific value, the complexity suddenly explodes.

I'm not sure I could argue why that is the case; I'm pretty sure I couldn't, at least not without thinking about it. But I find that weird and hence interesting. By the way, there are similar phenomena elsewhere. We're going to start with propositional logic tomorrow, and the interesting thing about propositional logic is that satisfiability is basically one of those classical problems that are representative of many others. In particular, this whole notion of NP-completeness is essentially defined in terms of finding satisfying assignments for propositional logical formulas, and those problems happen to be, in a specific sense, equivalent to constraint satisfaction problems. Which means that in SAT solvers, one of the big topics of computer science, especially because of this whole NP-completeness business, you get the same result: there is a certain ratio of clauses to variables at which random formulas suddenly start to explode in the computation time that basically all SAT solvers need to solve them.
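You can observe this threshold behaviour on random formulas yourself. The following toy sketch is entirely my own illustration (with brute-force satisfiability checking, so only tiny instances): it draws random 3-CNF clauses over n variables and counts how often the formula is satisfiable at different clause-to-variable ratios; the fraction drops sharply, and solvers slow down, around a ratio of roughly 4.27.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """A random 3-CNF formula: each clause picks 3 distinct variables,
    each negated with probability 1/2. A literal is (variable, negated?)."""
    return [
        [(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
        for _ in range(n_clauses)
    ]

def satisfiable(n_vars, clauses):
    # Brute force over all 2^n assignments; fine for the tiny n used here.
    for assign in itertools.product([False, True], repeat=n_vars):
        if all(any(assign[v] != neg for v, neg in cl) for cl in clauses):
            return True
    return False

rng = random.Random(0)
n = 12
for ratio in (2.0, 4.3, 7.0):
    trials = 30
    sat = sum(
        satisfiable(n, random_3sat(n, int(ratio * n), rng))
        for _ in range(trials)
    )
    # far below the threshold almost everything is satisfiable,
    # far above it almost nothing is
    print(f"clauses/variables = {ratio}: {sat}/{trials} satisfiable")
```

The hard, slow instances cluster around the crossover point: well below the threshold a satisfying assignment is easy to stumble upon, well above it a short refutation is easy to find, and right at the threshold neither is.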

Part of a chapter: Constraint Propagation
Access: open access
Duration: 00:05:39 min
Recording date: 2020-10-31
Uploaded on: 2020-10-31 10:47:38
Language: en-US

Iterative algorithms for CSPs and their performance.
