Great. So what's the whole point? The whole point is basically that we can do inference and come up with a smarter, equivalent network to solve, by enforcing arc consistency. Enforcing arc consistency really just means we remove values from our variable domains, and we do that until the whole network is arc consistent. Okay, now let's assume that we have some algorithm that does that; let's call it AC. Then, of course, AC is sound: if we enforce arc consistency, we only ever get rid of values in our domains that don't lead to a proper solution anyway. So we know we never rule out any viable solutions by going to an arc consistent network. Right, and the other smart thing is that what AC does is basically forward checking, just smarter, so we also know it subsumes forward checking. Remember, with forward checking we only rule out those values that are already ruled out by a constraint on a variable we have already assigned a value to. Arc consistency does the same thing, but also for variables that haven't been assigned a value yet: just by looking at the values still available to one variable, we rule out all values of the other variable that are inconsistent with every one of them. So we have a sound algorithm that gives us a smarter network, and it's even smarter than forward checking. So what does an algorithm that does this look like? Well,
given two variables u and v, we check, for each value d in the domain of u, whether there is some value d' in the domain of v such that (d, d') satisfies the constraint. If there is no such d', we just get rid of that one value d. The runtime of this simple Revise algorithm, for a fixed pair of variables, is k², where k is the maximum domain size in our network. And the one thing we will assume every time we do any kind of complexity analysis here is that checking a constraint takes constant time; otherwise, just add that factor to your complexity. So what does this look like? We again look at our simple constraint network. Let's check what happens if I try to make v3 arc consistent relative to v2. Our algorithm says: for each value in u, in this case v3, check whether there is some d' that satisfies the constraint. Starting with 1: is there a value of v2 that satisfies the constraint? As we already talked about, there isn't, so we get rid of 1. Now checking 2: is there a value in v2 that satisfies the constraint? There is not, so we get rid of 2. Then we check 3: is there a value in v2 that satisfies the constraint? Yes, there is, namely 2, so 3 stays.
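The Revise step just walked through can be sketched in Python. To be clear, none of this is from the lecture itself: the domain representation, the names, and in particular the concrete domains and the constraint v3 > v2 are assumptions, chosen so that the run reproduces the walkthrough (1 and 2 get pruned, 3 keeps its support 2).

```python
def revise(domains, u, v, constraint):
    """Make u arc consistent relative to v: delete every value d in
    dom(u) that has no support d' in dom(v) with constraint(d, d')."""
    removed = False
    for d in list(domains[u]):          # iterate over a copy, we mutate below
        if not any(constraint(d, dp) for dp in domains[v]):
            domains[u].remove(d)        # no supporting d' exists: prune d
            removed = True
    return removed                      # True iff the domain changed

# Hypothetical data chosen to match the example above:
domains = {"v2": {2, 3}, "v3": {1, 2, 3}}
revise(domains, "v3", "v2", lambda d, dp: d > dp)   # assumed constraint v3 > v2
print(domains["v3"])                                # -> {3}
```

The two nested loops, each over at most k values, are exactly where the k² per pair of variables comes from.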
Now, of course, the inference algorithm you could come up with that does this is basically the following: take the constraint network, with your current partial assignment added as unary constraints and so on, and for each constraint that is left between any two variables in the network, run Revise on that pair. If a domain shrinks, wonderful; do that again and again until Revise doesn't change anything anymore, and then return the resulting constraint network. That is a smart idea insofar as, if I do that, my network is ultimately going to be arc consistent, in the sense that for every pair of variables, each of the two will be arc consistent with respect to the other. So that works. The runtime is m·k² times n·k, so basically n·m·k³: the m·k² comes from one sweep of Revise over all m constraints, and to reach a fixed point, i.e., to repeat the whole thing until nothing changes anymore, you need in the worst case n·k sweeps, because each sweep may remove just a single value and there are only n·k values, k for each of the n variables, to remove. But one thing should be noted: we can be
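The naive fixed-point procedure just described is the AC-1 algorithm. Here is a minimal sketch under the same assumptions as before (names and representation are illustrative; constraints are stored as predicates, one per directed pair of variables):

```python
def revise(domains, u, v, constraint):
    """Delete every value of u that has no support in v; report changes."""
    removed = False
    for d in list(domains[u]):
        if not any(constraint(d, dp) for dp in domains[v]):
            domains[u].remove(d)
            removed = True
    return removed

def ac1(domains, constraints):
    """AC-1: sweep Revise over all directed constraint arcs (m*k^2 work
    per sweep) and repeat whole sweeps until one sweep changes nothing
    (at most n*k sweeps, since each sweep removes at least one value)."""
    changed = True
    while changed:
        changed = False
        for (u, v), c in constraints.items():
            if revise(domains, u, v, c):
                changed = True
    return domains

# Hypothetical two-variable network with the assumed constraint v3 > v2,
# stated once per direction so both domains get pruned:
domains = {"v2": {2, 3}, "v3": {1, 2, 3}}
constraints = {("v3", "v2"): lambda d, dp: d > dp,
               ("v2", "v3"): lambda d, dp: dp > d}
print(ac1(domains, constraints))   # -> {'v2': {2}, 'v3': {3}}
```

Note that AC-1 re-sweeps every arc even if only one domain changed; that wasted work is what the smarter variants avoid.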
Presenters
Accessible via
Open access
Duration
00:22:01 min
Recording date
2020-10-31
Uploaded on
2020-10-31 10:57:29
Language
en-US
Definition and general remarks about Arc Consistency. Explanation of AC-1 and AC-3.