enhancement of backtracking
principle: if a subproblem is inconsistent, better find it out early
applied to backtracking:
the second is better: it avoids visiting 9215 nodes
how?
if there is no satisfying assignment in the descendants of a node
⇓
better to find that out immediately than to visit many descendants
how: during the search, learn new constraints
consequences of original constraints ⇒ have to be satisfied anyway
if they are falsified in a node ⇒ no satisfying assignment from the node
detected immediately, without visiting the subtree
a part of the tree of recursive calls:
in the leaves, x1 = 1 and x4 = 2 suffice for inconsistency
example: constraints x1 ≥ x5
and x5 ≥ x4
new constraint: forbid x1 = 1 and x4 = 2 (no value of x5 can then satisfy both; see the check below)
a possible situation during the rest of the search
the new constraint allows cutting the search short
instead of choosing x5 as the next variable after x4 = 2 and visiting its subtree
but:
learn a constraint only if its usefulness (search cuts) is reckoned larger than its cost (checking it at every subsequent node)
reckoned: no way to know in advance
automatically created constraints have the form:
exactly one tuple is not allowed
for such constraints, shorter mostly means: fewer variables in the forbidden tuple
only create a constraint if its number of variables is below a certain bound
example: only create constraints of at most 3 variables
this is 3-bounded learning
discard learned constraints when they no longer seem useful
each learned constraint forbids a single assignment
if the forbidden assignment is very different from the current assignment
⇓
the constraint is unlikely to still be useful
relevance-bounded learning of order i:
discard a learned constraint if its forbidden assignment differs from the
current assignment on more than i variables
differs = variables assigned in both, but to different values
which constraints are true, and can therefore be learned?
backjumping produces sets of variables that are sufficient for inconsistency
consider for learning: constraints forbidding the current assignment to these variables