Bounded Approximation Algorithms


Presentation Transcript


Bounded Approximation Algorithms

Sometimes we can handle NP problems with polynomial-time algorithms which are guaranteed to return a solution within some specific bound of the optimal solution:
- within a constant c of optimal
- within log n of optimal, etc.
- Branch and Bound envelopes
The bound is the approximation factor.
Remember that in section 5.4 we discussed an efficient greedy polynomial-time algorithm for Set Cover which is guaranteed to find a solution within ln(n) of optimal.


Set Cover

Assume a universe U of elements, and a set S of subsets Si of the universe.
Set Cover is the problem of finding the minimum number of subsets whose union includes all elements of the universe.
U = {1, 2, 3, 4, 5}
S = {{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}}
What is the minimum set cover in this case?
Greedy algorithm? Is it optimal?

Set Cover Example

Each town has a subset consisting of all towns to which it is directly connected.
Until all elements in U are covered, select the Si with the largest number of uncovered elements, as in the sketch below.
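Here is a minimal Python sketch of this greedy rule (the function name and the set-based representation are illustrative choices, not from the slides):

```python
def greedy_set_cover(universe, subsets):
    """Greedy Set Cover: repeatedly pick the subset that covers the most
    still-uncovered elements. Returns the indices of the chosen subsets.
    The result is guaranteed to be within ln(n) of the optimal cover size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Select the Si with the largest number of uncovered elements.
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the subsets do not cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# The example from the previous slide:
U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(U, S))  # [0, 3]: {1,2,3} and {4,5}, which is also optimal
```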

Set Cover

Set Cover is another NP-complete problem, so no polynomial-time algorithm is known that finds the optimum (and none exists unless P = NP).
We can show that our simple greedy algorithm gives an answer within ln(n) of the optimal solution.
Thus, if k is the size of the optimal solution, the greedy algorithm is guaranteed to find a cover of size between k and k·ln(n).
ln(n) is the approximation factor.
Provably, no polynomial-time algorithm for Set Cover has a substantially smaller approximation factor (unless P = NP)!

Clustering Example

In clustering we want to find a natural way to group a set of instances into reasonable clusters.
It is a type of unsupervised learning: put similar instances into the same class.
A distance function is needed; Euclidean distance is common, but others can be used.
One variation is k-Cluster, where the user specifies the number of clusters k to be found.
It is often better if k can be discovered automatically.

k-Cluster

Input: Points X = {x1, …, xn}; integer k
Output: A partition of X into k clusters C1, …, Ck
Goal: Minimize the diameter of the clusters

Clusters are spheres, all with the same diameter, such that all points are enclosed in the spheres.
This problem is NP-hard, so no polynomial-time exact algorithm is known.

k-Cluster Approximation Algorithm

Choose k of the data points as cluster centers ("representatives"):
- Choose the first center arbitrarily.
- Choose each succeeding center to be the point furthest from the centers chosen so far (i.e., the point furthest from its closest current center).
Then assign all other data points to their closest cluster center. Complexity? A sketch of this procedure follows.
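A minimal Python sketch of this farthest-first procedure, assuming points are given as coordinate tuples (the names and the Euclidean distance are illustrative choices):

```python
import math

def k_cluster(points, k):
    """Farthest-first traversal for k-Cluster: pick the first center
    arbitrarily, then repeatedly pick the point furthest from all chosen
    centers. Runs in O(nk) distance computations."""
    centers = [points[0]]                       # first center chosen arbitrarily
    # d[j] = distance from points[j] to its closest chosen center so far
    d = [math.dist(p, centers[0]) for p in points]
    for _ in range(1, k):
        far = max(range(len(points)), key=lambda j: d[j])   # furthest point
        centers.append(points[far])
        d = [min(d[j], math.dist(points[j], points[far])) for j in range(len(points))]
    # Assign every point to the index of its closest center.
    assignment = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
    return centers, assignment
```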

k-Cluster Approximation Factor

It definitely clusters, but is the max diameter within a factor of optimal?
Let x be the point furthest from the k cluster centers (i.e., x would be the next center that would have been chosen).
Let r be the distance from x to its closest center.
Then every point must be within r of its cluster center. Why?
Thus the maximum diameter of any cluster is ≤ 2r.
We also have k+1 points (the k centers plus x) that are all at distance at least r from each other.
Any partition into k clusters must put 2 of these points in the same cluster, which therefore has diameter at least r (a lower bound on the optimum).
Thus the optimal diameter is between r and 2r inclusive, and we have an approximation factor of 2.


Approximation Algorithms

Does this mean we can calculate the optimum once we have our approximation? If so, we would have a solution in P and could show that P = NP.
What can we say about the optimum in this type of case?
The optimal diameter is within a factor of 2 of the max diameter found by the k-Cluster algorithm.
This gives us another way to deal with NP problems: seek polynomial-time algorithms that are guaranteed to find a solution within some factor of optimal.
This approach can be used for many NP problems.

Local Search

A powerful approach which can be used for any optimization problem.
From a current state, try a small move to a neighboring state which improves the overall objective function.
The notion of neighborhood will differ depending on the problem.
The neighborhood size (adjustment rate) must also be set:
- Too small: slow
- Too big: jumps over good solutions
A generic sketch appears below.
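As a sketch, the basic loop might look like this in Python (a minimization version; the `neighbors` and `objective` callables are problem-specific placeholders):

```python
def local_search(initial, neighbors, objective):
    """Basic hill-climbing local search (minimization).
    From the current state, move to a neighboring state whenever it improves
    the objective; stop when no neighbor improves (a local optimum)."""
    current, current_val = initial, objective(initial)
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            value = objective(candidate)
            if value < current_val:        # take the first improving move
                current, current_val = candidate, value
                improved = True
                break
    return current, current_val
```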


Local Search Example: Solving a Linear Program

You own a chocolate shop which produces two types of chocolate boxes:
- A normal box, which gives a $1 profit
- A deluxe box, which gives a $6 profit
The variables are the numbers of boxes produced per day:
- x1 is the number of boxes of normal chocolate
- x2 is the number of boxes of deluxe chocolate
The objective is to set x1 and x2 so as to maximize profit:
max (x1 + 6x2)    Profit = x1 + 6x2
The constraints are:
x1 ≤ 200    Maximum demand of normal boxes per day
x2 ≤ 300    Maximum demand of deluxe boxes per day
x1 + x2 ≤ 400    Maximum production capacity
x1, x2 ≥ 0    Can't have a negative number of boxes
Assume fractional values to start with, and choose a neighbor distance (step size); a sketch follows.
Local search could naturally also support ILP, non-linear constraints/objectives, etc.
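A toy Python sketch of local search on this linear program (the step logic is an illustration of the slide's idea; real LPs are solved exactly, e.g. by the simplex method):

```python
import random

def profit(x1, x2):
    return x1 + 6 * x2                     # objective: Profit = x1 + 6*x2

def feasible(x1, x2):
    return 0 <= x1 <= 200 and 0 <= x2 <= 300 and x1 + x2 <= 400

def lp_local_search(iterations=100_000, step=1.0):
    """Start at a feasible point; move to a random feasible neighbor
    whenever it improves profit."""
    x1, x2 = 0.0, 0.0                      # feasible start
    for _ in range(iterations):
        n1 = x1 + random.uniform(-step, step)
        n2 = x2 + random.uniform(-step, step)
        if feasible(n1, n2) and profit(n1, n2) > profit(x1, x2):
            x1, x2 = n1, n2
    return x1, x2

print(lp_local_search())  # approaches the optimum x1 = 100, x2 = 300 (profit 1900)
```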

Local Search with TSP

A legal TSP path is a permutation of the cities: ABCDE.
A close legal neighbor swaps two cities in the permutation (e.g. ABCDE => ADCBE). How many 2-permutes are there?
The TSP approach applies a 2-permute whenever it shortens the path length, and keeps doing so until no 2-permute leads to improvement (a local minimum).
Allowing 3-permutes, where 3 cities can differ, gives fewer local minima and better solutions. At what trade-off?
Likewise 4-permutes, etc.; an n-permute is optimal but exponential.
Section 9.3.1 proposes 2-change, which drops 2 edges from the current path and adds 2 edges to create a legal path: a similar but somewhat tighter neighborhood than 2-permute, and a type of 2-opt.
Other variations: 2-opt, 3-opt, k-opt, etc. (all variations on k edges).
There are lots of other TSP local search approaches: Genetic Algorithms, Simulated Annealing, Ant Colony, etc.
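Below is a minimal Python sketch of the 2-permute search described above (the coordinate representation and names are illustrative assumptions):

```python
import math, random

def tour_length(tour, coords):
    """Length of the closed tour that visits cities in the given order."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_permute_search(coords):
    """Swap a pair of cities whenever the swap shortens the tour; stop when
    no 2-permute improves it (a local minimum, not necessarily optimal)."""
    tour = list(range(len(coords)))
    random.shuffle(tour)                              # random initial state
    best = tour_length(tour, coords)
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 1, len(tour)):
                tour[i], tour[j] = tour[j], tour[i]   # try a 2-permute
                length = tour_length(tour, coords)
                if length < best:
                    best, improved = length, True
                else:
                    tour[i], tour[j] = tour[j], tour[i]  # undo non-improving swap
    return tour, best
```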

Properties of Local Search

Local search can be relatively simple, and it is often one of the most effective algorithms across many applications, regardless of application complexity (non-linear, stochastic, etc.).
How long will it take? That depends on the number of iterations, which is not usually exactly predictable.
Since each iteration improves the objective function, the algorithm works in a constrained space of the problem, but the number of iterations can still be large. It often finds a solution relatively efficiently.
Local optima:
- The good news is that the algorithm improves the objective value with each iteration and stops as soon as it can no longer improve.
- The bad news is that this could be a local optimum which is not a very good solution.
Can local search be optimal? Yes, when the solution space is convex (no local optima) and the neighborhood size is sufficiently small.


Dealing with Local Optima

The number of local optima will depend on the problem; there is some good news for many situations.
Note that the search is highly affected by the initial state (usually chosen randomly), since the search is tied to that initial neighborhood and often stays close to it.
What if an algorithm has a 50% chance of hitting a local optimum for a certain problem? Just run it multiple times with different random start states (see the sketch below).
If the chance of hitting a true optimum is p, then running the algorithm k times gives probability 1 - (1-p)^k of finding an optimal solution. For example, with p = 0.5, ten runs succeed with probability 1 - 0.5^10 ≈ 0.999.
Another approach is to add some randomness to the algorithm and occasionally allow it to move to a neighbor which worsens the objective, allowing it to potentially escape local optima.
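A minimal random-restart wrapper, assuming a minimizing `search()` that picks its own random initial state and returns a (solution, value) pair (a hypothetical interface):

```python
def random_restarts(search, k):
    """Run the randomized local search k times and keep the best result.
    If one run finds a true optimum with probability p, then k independent
    runs succeed with probability 1 - (1 - p)**k."""
    best_solution, best_value = None, float("inf")
    for _ in range(k):
        solution, value = search()     # fresh random start state each run
        if value < best_value:
            best_solution, best_value = solution, value
    return best_solution, best_value
```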

Gradient Descent – Common Powerful Tool

Slightly change the state in the direction which maximizes the improvement in the objective function: the steepest gradient.
This is usually done by taking the partial derivative of the objective function with respect to the adjustable variables/parameters.
Then we change the variables in the direction of the derivative/gradient (with a step size that is not too big) which gives the best local change. This is a type of greedy search.
We then recalculate the gradient at the new point and repeat until we have reached a minimum (a local optimum).
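A minimal Python sketch of the idea (the quadratic example and the names are illustrative, not from the slides):

```python
def gradient_descent(grad, x0, step_size=0.1, tol=1e-8, max_iters=10_000):
    """Minimize a differentiable function: repeatedly step against its
    gradient until the gradient (nearly) vanishes at a local minimum."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:   # gradient ~ 0: local minimum
            break
        x = [xi - step_size * gi for xi, gi in zip(x, g)]
    return x

# Example: f(x, y) = (x - 3)^2 + (y + 1)^2 has gradient (2(x-3), 2(y+1)).
print(gradient_descent(lambda v: [2 * (v[0] - 3), 2 * (v[1] + 1)], [0.0, 0.0]))
# converges toward [3.0, -1.0]
```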

Approximation and Local Search Review

With so many problems being in NP, approximation is becoming very common.
Local Search is becoming more and more popular:
- Fast computers
- Very complex problems
- Sampling and stochastic approaches
- Neural Networks
- Genetic Algorithms