CHAPTER 6 Oliver Schulte


Uploaded On 2023-10-31






Presentation Transcript

1. Chapter 6: Constraint Satisfaction Problems
Oliver Schulte, Summer 2011

2. Midterm Announcements
Please bring ID to the exam.
You can bring a cheat sheet.
Chapters 1-6 covered; see Lecture Schedule.
The CSP definition is covered, also arc consistency. Not covered: CSP search methods (today's topic).
No calculators, smartphones, textbook, notes.
Be on time.
Read the instructions ahead of time; they are posted on the web.

3. Talk Announcement
CS SEMINAR, June 23rd, 2:30 p.m., TASC1 9204W
Generalized Planning: Theory and Practice
Hector J. Levesque
ABSTRACT: While most of the research in automated planning within AI has focussed on methods for finding sequential (or straight-line) plans, there is growing interest in a more general account, where the plans may need to contain branches and loops. In this talk, I will present some recent work in this area, including some new theoretical results, as well as a description of a new planning system called FSAPLANNER and some of the generalized planning problems it can solve.

4. Outline
- CSP examples
- Backtracking search for CSPs
- Problem structure and problem decomposition
- Local search for CSPs
- Fields Institute Summer Workshops

5. Environment Type Discussed in this Lecture
Static environment. (CMPT 310 - Blind Search)
The original slide table classifies tasks along the dimensions fully observable / deterministic / sequential / discrete: planning and heuristic search (discrete states), control and cybernetics with continuous function optimization (continuous states), and vector search: constraint satisfaction (this lecture).

6. Agent Architecture Discussed in this Lecture
Graph-Based Search: the state is a black box with no internal structure, atomic.
Factored Representation: the state is a list or vector of facts.
CSP: a fact is of the form "Variable = value".
A model is a structured representation of the world.

7. Constraint Satisfaction Problems (CSPs)
CSP: the state is defined by variables Xi with values from domain Di.
The goal test is a set of constraints specifying allowable combinations of values for subsets of variables.
This allows useful general-purpose algorithms with more power than standard search algorithms; their power is close to simulating Turing machines.

8. Example: Map-Coloring
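
The map-coloring CSP on this slide can be written down directly as data: variable domains plus binary inequality constraints. A minimal Python sketch (the variable and constraint names follow the Australia map used in the lecture; the particular solution shown is illustrative):

```python
# Variables are the Australian regions; each may take one of three colors.
DOMAINS = {v: {"red", "green", "blue"}
           for v in ["WA", "NT", "SA", "Q", "NSW", "V", "T"]}

# Binary constraints: neighboring regions must receive different colors.
NEIGHBORS = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
             ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
             ("NSW", "V")]

def consistent(assignment):
    """Goal test: no two assigned neighbors share a color."""
    return all(assignment[a] != assignment[b]
               for a, b in NEIGHBORS
               if a in assignment and b in assignment)

# A complete assignment that satisfies all constraints is a solution.
solution = {"WA": "red", "NT": "green", "SA": "blue", "Q": "red",
            "NSW": "green", "V": "red", "T": "green"}
print(consistent(solution))  # True
```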

9. CSPs (continued)
An assignment is complete when every variable is mentioned.
A solution to a CSP is a complete assignment that satisfies all constraints.
Some CSPs require a solution that maximizes an objective function.
Constraints with continuous variables are common; linear constraints lead to linear programming.
Examples of applications: airline schedules, final exam scheduling, cryptography, Sudoku, crosswords.

10. Example: Map-Coloring contd.

11. Varieties of Constraints
Unary constraints involve a single variable, e.g., SA ≠ green.
Binary constraints involve pairs of variables, e.g., SA ≠ WA.
Higher-order constraints involve 3 or more variables.
Preferences (soft constraints), e.g., "red is better than green", are often representable by a cost for each variable assignment, giving constrained optimization problems.

12. Constraint Graph
Binary CSP: each constraint relates at most two variables.
Constraint graph: nodes are variables, arcs show constraints.
General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
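
One way to exploit the graph structure, as noted above, is to split the CSP into its connected components and solve each independently. A minimal sketch (the edge list is the Australia constraint graph from the lecture; function names are illustrative):

```python
from collections import defaultdict

def components(variables, edges):
    """Return the connected components of the constraint graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in variables:
        if v in seen:
            continue
        # Depth-first flood fill from v collects one component.
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
print(components(["WA", "NT", "SA", "Q", "NSW", "V", "T"], edges))
# Tasmania ("T") ends up in a component of its own.
```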

13. Graphs and Factored Representations (UBC AIspace CSP)
Graphs over variables (concepts, facts) capture local dependencies between variables.
Absence of edges = independence.
AI systems try to reason locally as much as possible.
A potential solution to the relevance problem: how does the brain retrieve relevant facts in a given situation, out of the millions of facts that it knows? Answer: direct links represent direct relevance.
Computation in general is a local process operating on a factored state. (What is the state of a program run?)

14. Problem
Consider the constraint graph on the right (variables a, b, c, d, e).
The domain for every variable is {1, 2, 3, 4}.
There are 2 unary constraints:
- variable "a" cannot take values 3 and 4;
- variable "b" cannot take value 4.
There are 8 binary constraints stating that variables connected by an edge cannot have the same value.

15. Example: 4-Queens Problem
Variables X1, X2, X3, X4 (one queen per column), each with domain {1, 2, 3, 4}.

16. Arc Consistency
An arc X → Y is consistent if for every value x of X there is some value y of Y consistent with x (note that this is a directed property).
Consider the state of the search after WA and Q are assigned: SA → NSW is consistent if SA = blue and NSW = red.

17. Arc Consistency
X → Y is consistent if for every value x of X there is some value y consistent with x.
NSW → SA is consistent if NSW = red and SA = blue, but for NSW = blue there is no consistent value SA = ???

18. Arc Consistency
We can enforce arc consistency: the arc can be made consistent by removing blue from NSW.
Continue to propagate constraints…
Check V → NSW: not consistent for V = red, so remove red from V.

19. Arc Consistency
Continue to propagate constraints…
SA → NT is not consistent and cannot be made consistent.
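
Making a single arc X → Y consistent, as described in the last few slides, amounts to deleting from X's domain every value with no consistent partner in Y's domain. A minimal sketch, assuming the only constraint is "different colors" (illustrative names and data):

```python
def revise(domains, x, y):
    """Prune domains[x] against domains[y]; return True if anything was removed."""
    removed = {vx for vx in domains[x]
               if not any(vx != vy for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

# NSW -> SA: blue has no consistent partner in SA's domain, so it is removed.
domains = {"NSW": {"red", "blue"}, "SA": {"blue"}}
revise(domains, "NSW", "SA")
print(domains["NSW"])  # {'red'}
```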

20. Standard Search Formulation (incremental)
Let's formulate a state-graph problem, then take advantage of special structure later.
States are defined by the values assigned so far.
Initial state: the empty assignment, { }.
Successor function: assign a value to an unassigned variable that does not conflict with the current assignment. ⇒ Fail if there are no legal assignments (not fixable!).
Goal test: the current assignment is complete.
This is the same for all CSPs!

21. Standard Search Formulation (incremental)
Can we use breadth-first search?
Branching factor at the top level: nd (any of the d values can be assigned to any of the n variables).
Next level: (n-1)d.
We generate n!·d^n leaves even though there are only d^n complete assignments. Why?
Commutativity: the order of application of any given set of actions has no effect on the outcome.

22. Backtracking Search
Variable assignments are commutative, i.e., [WA = red then NT = green] is the same as [NT = green then WA = red].
We only need to consider assignments to a single variable at each node: b = d, and there are d^n leaves.
Depth-first search for CSPs with single-variable assignments is called backtracking search.
Is this uninformed or informed? Backtracking search is the basic uninformed algorithm for CSPs.
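
Backtracking search as just described can be sketched in a few lines for the map-coloring CSP: depth-first search that assigns one variable per node, in a fixed order, with no heuristics yet (the data and helper names are illustrative):

```python
DOMAINS = {v: ["red", "green", "blue"]
           for v in ["WA", "NT", "SA", "Q", "NSW", "V", "T"]}
EDGES = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def conflicts(var, value, assignment):
    """Does assigning var=value clash with any already-assigned neighbor?"""
    return any(assignment.get(b if a == var else a) == value
               for a, b in EDGES if var in (a, b))

def backtrack(assignment, variables):
    if len(assignment) == len(variables):
        return assignment                      # goal: complete assignment
    var = next(v for v in variables if v not in assignment)
    for value in DOMAINS[var]:
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables)
            if result is not None:
                return result
            del assignment[var]                # undo and try the next value
    return None                                # no legal value: backtrack

solution = backtrack({}, list(DOMAINS))
print(solution is not None)  # True: the map is 3-colorable
```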

23. Backtracking Example (4 Feb 2004, CS 3243 - Constraint Satisfaction)

24. Backtracking Example

25. Backtracking Example

26. Backtracking Example

27.

28. Improving Backtracking Efficiency
General-purpose heuristics:

29. Most Constrained Variable
Choose the variable with the fewest legal values, a.k.a. the minimum remaining values (MRV) heuristic.

30. Most Constraining Variable
How do we choose between variables with the fewest legal values? As a tie-breaker among most constrained variables, the degree heuristic chooses the variable with the most constraints on remaining variables.

31. Least Constraining Value
Given a variable, choose the value that rules out the fewest choices for the neighboring variables.
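
The three ordering heuristics on the last slides (MRV, degree tie-break, least constraining value) can be sketched on toy data; the current domains and neighbor sets below are illustrative, not from the slides:

```python
domains = {"SA": {"blue"}, "NT": {"green", "blue"}, "Q": {"red", "green", "blue"}}
neighbors = {"SA": {"NT", "Q"}, "NT": {"SA", "Q"}, "Q": {"SA", "NT"}}

def select_variable(unassigned):
    # MRV: fewest legal values; ties broken by the degree heuristic
    # (most constraints on still-unassigned variables).
    return min(unassigned,
               key=lambda v: (len(domains[v]),
                              -len(neighbors[v] & set(unassigned))))

def order_values(var, unassigned):
    # LCV: prefer the value that removes the fewest options from neighbors.
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in neighbors[var] if n in unassigned)
    return sorted(domains[var], key=ruled_out)

unassigned = ["SA", "NT", "Q"]
print(select_variable(unassigned))  # 'SA' (only one legal value left)
```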

32. Forward Checking
Idea: keep track of remaining legal values for unassigned variables.

33. Forward Checking (continued)

34. Forward Checking (continued)

35. Forward Checking (continued)
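
Forward checking as animated above can be sketched directly: after each assignment, delete the assigned value from the domains of neighboring variables, and fail immediately if some domain empties (the three-variable edge list and names are illustrative):

```python
import copy

EDGES = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]

def forward_check(domains, var, value):
    """Return a pruned copy of domains after var=value, or None on a wipe-out."""
    new = copy.deepcopy(domains)
    new[var] = {value}
    for a, b in EDGES:
        if var in (a, b):
            other = b if a == var else a
            new[other].discard(value)
            if not new[other]:
                return None        # some variable has no legal value left
    return new

domains = {v: {"red", "green"} for v in ["WA", "NT", "SA"]}
step1 = forward_check(domains, "WA", "red")
print(step1["NT"])                          # {'green'}
print(forward_check(step1, "NT", "green"))  # None: SA's domain is wiped out
```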

36. Constraint Propagation

37.-46. Example: 4-Queens Problem (constraint propagation, animated over several slides)
Domains shown in the order X1, X2, X3, X4:
{1,2,3,4} {1,2,3,4} {1,2,3,4} {1,2,3,4}
{1,2,3,4} {3,4} {2,4} {2,3}
{1,2,3,4} {3,4} { } {3}
{1,2,3,4} {4} {2,4} {2,3}
{1,2,3,4} {4} {2} {3}
{1,2,3,4} {3,4} {2} { }

47. Arc Consistency vs. Search
Arc consistency speeds up search, but is not in itself a complete search procedure.
Example: Simple Problem 2 in AIspace. Arc consistency does not detect that the problem is insoluble.

48. Constraint Propagation
Techniques like constraint propagation (CP) and forward checking (FC) are in effect eliminating parts of the search space.
Inference complements search.
Constraint propagation goes further than FC by repeatedly enforcing constraints locally.
Arc consistency (AC) is a systematic procedure for constraint propagation (Mackworth 1977, UBC).

49. Arc Consistency Checking
Can be run as a preprocessor before search starts, or after each assignment during search.
AC must be run repeatedly until no inconsistency remains.
Trade-off: it requires some overhead, but is generally more effective than direct search; it can eliminate large (inconsistent) parts of the state space more effectively than search can.
We need a systematic method for arc checking: if X loses a value, the neighbors of X need to be rechecked.

50. Arc Consistency Checking
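
A minimal sketch of AC-3-style arc-consistency checking (Mackworth 1977), following the description above: process a queue of arcs, and whenever X loses a value, re-enqueue the arcs from X's neighbors into X. The three-variable example data is illustrative:

```python
from collections import deque

def ac3(domains, neighbors, different=lambda x, y: x != y):
    """Enforce arc consistency in place; return False if some domain empties."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        removed = {vx for vx in domains[x]
                   if not any(different(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                 # insoluble branch detected
            # X changed, so arcs into X must be rechecked.
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True

# Three mutually adjacent regions cannot be colored with these domains.
domains = {"WA": {"red"}, "NT": {"red", "green"}, "SA": {"red", "green"}}
neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA"}, "SA": {"WA", "NT"}}
print(ac3(domains, neighbors))  # False
```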

51. Arc Consistency as Message-Passing
This is a propagation algorithm; it is like sending messages to neighbors on the graph. How do we schedule these messages?
Every time a domain changes, all incoming messages need to be re-sent.
Repeat until convergence: no message will change any domain.
Since we only remove values from domains when they can never be part of a solution, an empty domain means no solution is possible at all, so we back out of that branch.

52. Example
The 5-queens problem in AIspace. Apply the heuristics:
- use the variable with minimum remaining values;
- use the variable with maximum degree;
- use the least constraining value;
- use arc consistency after choosing each value.

53. K-Consistency: Bounded Local Checks
Arc consistency does not detect all inconsistencies: the partial assignment {WA = red, NSW = red} is inconsistent.
Stronger forms of propagation can be defined using the notion of k-consistency. A CSP is k-consistent if, for any set of k-1 variables and any consistent assignment to those variables, a consistent value can always be assigned to any k-th variable.
E.g., 1-consistency = node consistency, 2-consistency = arc consistency, 3-consistency = path consistency.
Strongly k-consistent: j-consistent for all j in {k, k-1, …, 2, 1}.

54. Trade-offs
Running stronger consistency checks takes more time, but will reduce the branching factor and detect more inconsistent partial assignments.
No "free lunch": in the worst case, n-consistency takes exponential time.

55. Backtracking or Backjumping?
Partial assignment {Q = red, NSW = green, V = blue, T = red}: the next variable has no legal color left. Should we backtrack chronologically to the most recent assignment, or jump back to a variable that actually caused the failure?

56. Local Search for CSPs
Use a complete-state representation:
- initial state = all variables assigned values;
- successor states = change 1 (or more) values.
For CSPs: allow states with unsatisfied constraints (unlike backtracking); operators reassign variable values. Hill climbing with n-queens is an example.
Variable selection: randomly select any conflicted variable.
Value selection: the min-conflicts heuristic selects the new value that results in the minimum number of conflicts with the other variables.

57. Min-Conflicts Example 1
Use of the min-conflicts heuristic in hill climbing: h = 5, then h = 3, then h = 1.

58. Min-Conflicts Example 2
A two-step solution for an 8-queens problem using the min-conflicts heuristic.
At each stage a queen is chosen for reassignment in its column; the algorithm moves the queen to the min-conflicts square, breaking ties randomly.

59. Local Search for CSPs
function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
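
The pseudocode above can be made runnable for n-queens, with one queen per column and a conflict being a shared row or diagonal; this is a sketch, and the fixed seed and step bound are illustrative choices, not part of the algorithm:

```python
import random

def conflicts(var, val, current, n):
    """Number of queens attacking column var if its queen sits in row val."""
    return sum(1 for other in range(n) if other != var and
               (current[other] == val or
                abs(current[other] - val) == abs(other - var)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    current = [rng.randrange(n) for _ in range(n)]   # initial complete assignment
    for _ in range(max_steps):
        conflicted = [v for v in range(n) if conflicts(v, current[v], current, n)]
        if not conflicted:
            return current                           # current is a solution
        var = rng.choice(conflicted)                 # random conflicted variable
        current[var] = min(range(n),                 # min-conflicts value
                           key=lambda val: conflicts(var, val, current, n))
    return None                                      # give up: failure

print(min_conflicts(8))  # a conflict-free placement, one row index per column
```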

60. Advantages of Local Search
Local search can be particularly useful in an online setting.
Airline schedule example: mechanical problems require that a plane be taken out of service; we can locally search for another "close" solution in state space, which is much better (and faster) in practice than finding an entirely new schedule.
The runtime of min-conflicts is roughly independent of problem size: it can solve the million-queens problem in roughly 50 steps.
Why? n-queens is easy for local search because of the relatively high density of solutions in state space.

61. Graph Structure and Problem Complexity
Divide and conquer: solving disconnected subproblems.
Suppose each subproblem has c variables out of a total of n. The worst-case solution cost is O((n/c)·d^c), i.e., linear in n, instead of O(d^n), exponential in n.
E.g., n = 80, c = 20, d = 2: 2^80 = 4 billion years at 1 million nodes/sec, while 4·2^20 = 0.4 seconds at 1 million nodes/sec.

62. Tree-Structured CSPs
Theorem: if a constraint graph has no loops, then the CSP can be solved in O(n d^2) time, linear in the number of variables!
Compare the difference with the general CSP, where the worst case is O(d^n).

63. Algorithm for Solving Tree-Structured CSPs
Choose some variable as root; order the variables from root to leaves such that every node's parent precedes it in the ordering. Label the variables X1 to Xn; every variable now has one parent.
Backward pass: for j from n down to 2, apply arc consistency to the arc [Parent(Xj), Xj]; remove values from Parent(Xj) if needed to make the graph directed-arc-consistent.
Forward pass: for j from 1 to n, assign Xj consistently with Parent(Xj).
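
The backward and forward passes above can be sketched on a small path-shaped CSP a-b-c with a "different values" constraint on each edge; the data and function names are illustrative:

```python
def solve_tree_csp(order, parent, domains, ok=lambda x, y: x != y):
    """order: variables root-first; parent[v]: v's parent (None for the root)."""
    # Backward pass: make each arc (Parent(Xj), Xj) directed-arc-consistent.
    for v in reversed(order[1:]):
        p = parent[v]
        domains[p] = {vp for vp in domains[p]
                      if any(ok(vp, vc) for vc in domains[v])}
        if not domains[p]:
            return None                      # no solution exists
    # Forward pass: assign each variable consistently with its parent.
    assignment = {}
    for v in order:
        p = parent[v]
        assignment[v] = next(vv for vv in domains[v]
                             if p is None or ok(assignment[p], vv))
    return assignment

order = ["a", "b", "c"]
parent = {"a": None, "b": "a", "c": "b"}
domains = {"a": {1, 2}, "b": {1}, "c": {1, 2}}
print(solve_tree_csp(order, parent, domains))  # {'a': 2, 'b': 1, 'c': 2}
```

The backward pass only ever prunes a parent's domain against its child, which is why the forward pass can never get stuck if the backward pass succeeds.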

64. Tree CSP Example
(Figure: a tree-structured constraint graph with initial domains drawn from {R, G, B}.)

65. Tree CSP Example
Backward pass (constraint propagation): domains after pruning shown in the figure.

66. Tree CSP Example
Forward pass (assignment) following the backward pass (constraint propagation): final assignment shown in the figure.

67. Tree CSP Complexity
Backward pass: n arc checks, each of complexity at worst d^2.
Forward pass: n variable assignments, O(nd).
Overall complexity: O(n d^2).
The algorithm works because if the backward pass succeeds, then every variable by definition has a legal assignment in the forward pass.

68. What About Non-Tree CSPs?
The general idea is to convert the graph to a tree. There are 2 general approaches:
1. Assign values to specific variables (cycle cutset method); this tries to exploit context-specific independence.
2. Construct a tree decomposition of the graph: connected subproblems (subgraphs) become nodes in a tree structure.

69. Cycle-Cutset Conditioning
Choose a subset S of variables from the graph so that the graph without S is a tree (S is the "cycle cutset").
For each possible consistent assignment to S:
- remove from the remaining variables any values inconsistent with S;
- use the tree-structured CSP algorithm to solve the remaining tree structure;
- if it has a solution, return it along with the assignment to S; if not, continue to try other assignments for S.

70.

71. Finding the Optimal Cutset
If c is small, this technique works very well.
However, finding the smallest cycle cutset is NP-hard, but there are good approximation algorithms.

72. Tree Decompositions
(Figure: subproblem nodes of the decomposition labeled with their solution tuples, e.g., red-green-blue, red-blue-green, blue-red-green, ….)

73. Rules for a Tree Decomposition
Every variable appears in at least one of the subproblems.
If two variables are connected in the original problem, they must appear together (with the constraint) in at least one subproblem.
If a variable appears in two subproblems, it must appear in each node on the path between them.

74. Tree Decomposition Algorithm
View each subproblem as a "super-variable" whose domain is the set of solutions for the subproblem, obtained by running a CSP solver on each subproblem. (Maximum subproblem size minus 1 = the treewidth of the constraint graph.) E.g., there are 6 solutions for 3 fully connected variables in the map problem.
Now use the tree CSP algorithm to solve the constraints connecting the subproblems: declare one subproblem the root node, create the tree, and run the backward and forward passes.
This is an example of the "divide and conquer" strategy.

75. Tree Decomposition
Every graph has a tree decomposition, not just constraint graphs.
A tree decomposition shows the optimal divide-and-conquer strategy for CSPs, optimization, search, and inference.
Typical result: if the treewidth of a problem graph is constant, then the search/optimization/inference problem is solvable by divide and conquer.

76. Summary
CSPs are a special kind of problem: states are defined by values of a fixed set of variables, and the goal test is defined by constraints on variable values, a factored representation. This represents many practical problems (Microsoft Research, University of Essex).
Backtracking = depth-first search with one variable assigned per node.
Heuristics: variable-ordering and value-selection heuristics help significantly.
Constraint propagation does additional work to constrain values and detect inconsistencies; it works effectively when combined with heuristics.
Iterative min-conflicts is often effective in practice.
The graph structure of a CSP determines problem complexity; e.g., tree-structured CSPs can be solved in linear time.