CONSTRAINT-SATISFACTION PROBLEM DEMOS
=====================================

Two domain theories for CSPs are provided to illustrate LPE. The domain
theories are formulated in a way suitable for the CSP solver to run: a list
of constraint variables is passed around (Vs), and the predicate valeq/3
accesses the value of a particular constraint variable.

--------------------

To see LPE etc. evaluating a single constraint test, do:

   % prolog
   | ?- compile(cspdemo).
   | ?- compile(cities).     % (or compile(zebras))
   | ?- demo.
   Option? 1.                % reset for LPE
   Option? 3.                % do LPE

Option 3 generates a random training example, and then uses the domain
theory to prove that it fails a constraint (assuming we haven't hit on the
solution by chance). The goal which is called is:

   fails(+Example, -VarsInvolved, -ConstraintNo)

where
   Example       is a hypothesised solution to the CSP (a list of values
                 for the constraint variables)
   VarsInvolved  is a list of the variable(s) involved in the constraint
                 which Example violated
   ConstraintNo  is the number of the constraint which Example failed

--------------------

To solve the CSP efficiently, we need to:

 1. Evaluate fails/3 as fast as possible.
 2. Find the `best' solution possible, i.e. one blaming the earliest
    variable assignments made. This allows the greatest backtracking
    (`backjumping'), and permits large portions of the search space to be
    pruned.

EBG and NONE find a solution quickly, but not necessarily the best one.
This has serious consequences for the CSP algorithm (`the masking effect'),
as sub-optimal backjumps are made, wasting much effort. LPE and PE are
guaranteed to find the `best' solution (see the ML92 paper), avoiding this
problem. Although their solutions to the first couple of examples are
slower than EBG/NONE, the speed for subsequent examples is extremely fast,
while still preserving the optimality of the solution found.
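To make the fails/3 interface concrete, here is a minimal sketch in Python
(the actual implementation is Prolog; the constraint representation and all
names below are our own illustration, not the supplied code):

```python
# Sketch of the fails/3 check: given a hypothesised solution, find a
# violated constraint and report the variables it involves plus its
# number.  A constraint is (number, variable-indices, test-function);
# a hypothesis is a list of values, one per constraint variable.

def fails(example, constraints):
    """Return (vars_involved, constraint_no) for the first constraint
    that the hypothesis violates, or None if it satisfies them all."""
    for number, variables, test in constraints:
        values = [example[v] for v in variables]
        if not test(*values):
            return variables, number
    return None

# Toy CSP with two variables: constraint 1 says v0 != v1,
# constraint 2 says v1 > v0.
constraints = [
    (1, [0, 1], lambda a, b: a != b),
    (2, [0, 1], lambda a, b: b > a),
]

print(fails([3, 3], constraints))   # -> ([0, 1], 1)  (constraint 1 fails)
print(fails([2, 5], constraints))   # -> None         (a solution)
```

Note this sketch simply returns the *first* violation found; as discussed
above, returning the best violation is what matters for efficient
backjumping.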
THE FULL CONSTRAINT-SATISFACTION ENGINE
=======================================

For those keen to see the CSP engine running in conjunction with
LPE/EBG/PE/NONE, the code is supplied, but only loosely documented. It is
supplied simply to illustrate LPE/EBG/PE/NONE in action; if anyone is
interested in more details then please email the authors.

The constraint-satisfaction algorithm (CSA) itself is called "Generalised
Backjumping" (`GBJ') (Tech. Report TR-92-20, May 1992, P. Clark and
R. Holte, Univ. of Ottawa, Canada), in the file gbj.pl. The tech report is
available on request. To run it and see LPE/PE/EBG in action:

   % prolog
   | ?- compile(gbj).        % compile the GBJ CSA + LPE
   | ?- compile(cities).     % or compile(zebras)
   | ?- gbj.                 % run

For each revised clause produced by LPE/PE/EBG, the algorithm prints a `*'.
Every 100 hypotheses examined, the system prints out the hypothesis number
and the current partial hypothesis being examined.

 - PE has a very long pause at the start while the theory is partially
   evaluated, and then solves the CSPs very quickly.

 - LPE interleaves learning (lazy partial evaluation) and problem-solving.
   It has a number of short pauses early on (LPEing), which become fewer as
   time goes on. The problem-solving part is, like PE, very quick.

 - EBG is a mixture: when operational clauses explain why a hypothesised
   solution fails the constraints, problem-solving is fast. However, when
   this is not the case, EBG is slow, having to find an explanation for
   failure from scratch using the original domain theory. Each explanation
   produces just a single new operational concept definition (a single `*'
   is printed). EBG suffers from the sub-optimality of its solutions and
   the masking effect. Its run-time on the zebras CSP is approx. 1 cpu
   hour, mainly caused by this sub-optimality.

 - NONE (no learning) has no pauses for learning, but executes the domain
   theory more slowly. In addition, sub-optimal explanations for failure
   result in extra hypotheses being needlessly searched.
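The details of GBJ are in the tech report, but the basic backjump step it
relies on can be sketched as follows (a Python illustration of the general
idea, not a transcription of gbj.pl; the function name is ours):

```python
# Hedged sketch of a backjumping step: when a hypothesis fails a
# constraint, jump back to the most recently assigned variable among
# those the violated constraint involves.  The earlier that variable
# was assigned, the bigger the jump and the larger the pruned subspace.

def backjump_depth(vars_involved, assignment_order):
    """assignment_order lists variables in the order they were
    assigned; return the depth (index) to jump back to: that of the
    deepest variable blamed by the failed constraint."""
    return max(assignment_order.index(v) for v in vars_involved)

order = ['x', 'y', 'z', 'w']              # w was assigned last
print(backjump_depth(['x', 'z'], order))  # -> 2 (jump back to z)
print(backjump_depth(['x'], order))       # -> 0 (jump to x: bigger prune)
```

This is why the quality of the explanation of failure matters: a
sub-optimal set of blamed variables forces a shallower jump and hence extra
search.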
Loose summary:

                       LPE      PE       EBG      NONE
   Learning speed      fast     fast     slow     -
   Learning amount     medium   high     low      -
   Execution speed     fast     fast     fast     slow
   Execution amount    low      low      medium*  medium*

   Learning speed:   time to learn a new operational clause for fails/3.
   Learning amount:  total number of fails/3 clauses learned.
   Execution speed:  time to evaluate fails/3.
   Execution amount: total number of times fails/3 is evaluated by the CSA.
   *: medium because of the masking effect (would be low for domains with
      no masking effect, i.e. where all solutions are equally satisfactory).

As discussed in the main LPE README file (under "MODES"), the
implementations of PE and EBG here are mainly for illustration, and could
be speeded up by alternative implementations. Thus precise speed
comparisons between these modes would be unfair (although they do
illustrate the right order-of-magnitude behaviour).

To see the final domain theory, do:

   | ?- lpe_listing(fails/3).

Versions
--------

The versions of EBG and NONE implemented here differ from those published
in the ML92 paper. Here, EBG and NONE settle for the *first* solution that
they find. As a result, computation of fails/3 is fast but may return a
sub-optimal solution -- resulting in sub-optimal backjumping and hence
unnecessary search for the CSA. Hence EBG and NONE test many more
hypotheses than the minimum necessary (as searched by LPE and PE).

In the ML92 paper, we used a variant where, if no operational definitions
apply, EBG and NONE find *all* the solutions from the original domain
theory and then pick just the best one. This removes the unnecessary search
for NONE, as we then always find the optimal solution to fails/3. For EBG
it only reduces the unnecessary search, as the masking effect (see the ML92
paper) may still produce a sub-optimal solution (sub-optimal operational
proofs will be used in preference to searching for the optimal proof in the
original domain theory).
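The two strategies described above can be contrasted in a small sketch
(again an illustrative Python rendering of the idea, not the Prolog code;
the function names and constraint encoding are our own). Variables are
indexed in assignment order, so "best" means the violation whose deepest
blamed variable was assigned earliest:

```python
# Sketch of the two fails/3 strategies: settle for the first violation
# found (fast, possibly sub-optimal), versus collecting all violations
# and picking the best culprit (slower per call, optimal backjumps).

def first_violation(example, constraints):
    """First-found strategy (as implemented here for EBG/NONE)."""
    for number, variables, test in constraints:
        if not test(*[example[v] for v in variables]):
            return variables, number
    return None

def best_violation(example, constraints):
    """All-solutions strategy (the ML92-paper variant)."""
    violated = [(variables, number)
                for number, variables, test in constraints
                if not test(*[example[v] for v in variables])]
    if not violated:
        return None
    # Blame the violation whose most recently assigned variable is
    # earliest, permitting the deepest backjump.
    return min(violated, key=lambda vn: max(vn[0]))

# Constraint 2 involves only earlier variables, so it is the better
# culprit when both constraints fail.
constraints = [
    (1, [0, 2], lambda a, b: a != b),
    (2, [0, 1], lambda a, b: a < b),
]
example = [4, 1, 4]                           # violates both constraints
print(first_violation(example, constraints))  # -> ([0, 2], 1)
print(best_violation(example, constraints))   # -> ([0, 1], 2)
```

The extra cost of best_violation per call is the price the ML92 variant
pays for removing (NONE) or reducing (EBG) the unnecessary search.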
However, although the ML92 variant removes/reduces unnecessary search, it
also pays a high price, as computation of fails/3 becomes significantly
more time-consuming. Either way, EBG and NONE perform significantly worse
than LPE and PE on these two problems.