1. Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees
Nick Harvey, U. Waterloo C&O
Joint work with Isaac Fung
2. What are sparsifiers?
Weighted subgraphs that approximately preserve some properties of a graph. (n = # vertices, m = # edges.)
Approximating all cuts:
- Sparsifiers: number of edges = O(n log n / ε²), every cut approximated within 1+ε. [BK'96]
- Õ(m)-time algorithm to construct them.
Spectral approximation:
- Spectral sparsifiers: number of edges = O(n log n / ε²), the "entire spectrum" approximated within 1+ε (the Laplacian matrix of the sparsifier approximates the Laplacian matrix of G). [SS'08]
- Õ(m)-time algorithm to construct them; O(n/ε²) edges in poly(n) time. [BSS'09]
3. Why are sparsifiers useful?
Approximating all cuts: sparsifiers give fast algorithms for cut/flow problems. (n = # vertices, m = # edges, v = flow value.)

Problem                                  | Approximation | Runtime           | Reference
Min st Cut                               | 1+ε           | Õ(n²)             | BK'96
Sparsest Cut                             | O(log n)      | Õ(n²)             | BK'96
Max st Flow                              | 1             | Õ(m + nv)         | KL'02
Sparsest Cut                             |               | Õ(n²)             | AHK'05
Sparsest Cut                             | O(log² n)     | Õ(m + n^(3/2))    | KRV'06
Sparsest Cut                             |               | Õ(m + n^(3/2+ε))  | S'09
Perfect Matching in Regular Bip. Graphs  | n/a           | Õ(n^(1.5))        | GKK'09
Sparsest Cut                             |               | Õ(m + n^(1+ε))    | M'10
4. Our Motivation
The BSS algorithm is very mysterious, and "too good to be true". Are there other methods to get sparsifiers with only O(n/ε²) edges?
Wild speculation: the union of O(1/ε²) random spanning trees gives a sparsifier (if weighted appropriately). This is true for the complete graph [GRV '08].
We prove: the speculation is false, but the union of O(log² n / ε²) random spanning trees does give a sparsifier.
5. Formal problem statement
Design an algorithm such that
- Input: an undirected graph G = (V, E). (n = # vertices, m = # edges.)
- Output: a weighted subgraph H = (V, F, w), where F ⊆ E and w : F → ℝ.
Goals:
- | |δ_G(U)| − w(δ_H(U)) | ≤ ε |δ_G(U)| for all U ⊆ V, where |δ_G(U)| is the number of edges between U and V \ U in G, and w(δ_H(U)) is the weight of the edges between U and V \ U in H.
- |F| = O(n log n / ε²).
- Running time Õ(m / ε²).
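The cut-approximation goal above can be stated as a small amount of code. This is a minimal sketch for checking the guarantee on one cut; the function names (`cut_size`, `cut_weight`, `approximates`) are ours, not from the talk.

```python
def cut_size(edges, U):
    """Number of edges of G crossing the cut (U, V \\ U)."""
    return sum(1 for u, v in edges if (u in U) != (v in U))

def cut_weight(weighted_edges, U):
    """Total weight of edges of H crossing the cut (U, V \\ U)."""
    return sum(w for u, v, w in weighted_edges if (u in U) != (v in U))

def approximates(G_edges, H_edges, U, eps):
    """Does H approximate the cut delta(U) of G within a 1+eps factor?"""
    g = cut_size(G_edges, U)
    return abs(g - cut_weight(H_edges, U)) <= eps * g
```

A sparsifier must satisfy this check simultaneously for all 2^n subsets U, which is what makes the problem nontrivial.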
6. Sparsifying the Complete Graph
Sampling: construct H by sampling every edge of G independently with probability p = 100 log n / n, and give each sampled edge weight 1/p.
Properties of H:
- # sampled edges = O(n log n).
- |δ_G(U)| ≈ w(δ_H(U)) for all U ⊆ V.
So H is a sparsifier of G.
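The uniform sampling step for the complete graph can be sketched directly. The constant 100 follows the slide; the function name is ours, and no claim is made about the best constant.

```python
import math
import random

def sparsify_complete_graph(n, seed=0):
    """Sample each edge of K_n independently with p = 100 ln(n)/n,
    giving each kept edge weight 1/p, as on the slide."""
    rng = random.Random(seed)
    p = min(1.0, 100 * math.log(n) / n)
    return [(u, v, 1.0 / p)
            for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]
```

For small n the probability clips to 1 and every edge is kept; sparsification only kicks in once n is large enough that 100 log n / n < 1.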
7. Generalize to arbitrary G?
We can't sample all edges with the same probability!
Idea [BK'96]: sample low-connectivity edges with high probability, and high-connectivity edges with low probability. (In the figure: keep the low-connectivity edges; eliminate most of the high-connectivity ones.)
8. Non-uniform sampling algorithm [BK'96]
Input: graph G = (V, E), parameters p_e ∈ [0, 1].
Output: a weighted subgraph H = (V, F, w), where F ⊆ E and w : F → ℝ.

For i = 1 to ρ:
    For each edge e ∈ E:
        With probability p_e, add e to F and increase w_e by 1/(ρ p_e).

Main question: can we choose ρ and the p_e's to achieve the sparsification goals?
9. Non-uniform sampling algorithm [BK'96] (continued)
Claim: H perfectly approximates G in expectation!
For any e ∈ E, E[w_e] = ρ · p_e · 1/(ρ p_e) = 1, so for every U ⊆ V, E[w(δ_H(U))] = |δ_G(U)|.
Goal: show that every w(δ_H(U)) is tightly concentrated.
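The sampling loop above is short enough to write out. This is a sketch of the [BK'96]-style procedure from the slides; the function name `bk_sample` is ours, and choosing good values of `rho` and `p` is exactly the question the talk addresses.

```python
import random
from collections import defaultdict

def bk_sample(edges, p, rho, seed=0):
    """Run rho rounds; in each round keep edge e with probability p[e]
    and add 1/(rho * p[e]) to its weight, so E[w_e] = 1 for every e."""
    rng = random.Random(seed)
    w = defaultdict(float)
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] += 1.0 / (rho * p[e])
    return dict(w)  # the edges with w_e > 0 form F
```

Note the unbiasedness: an edge with p_e = 1 is kept in every round and ends with weight exactly ρ · 1/(ρ · 1) = 1.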
10. Prior Work (assume ε is constant)
Benczúr–Karger '96:
- Set ρ = O(log n) and p_e = 1/(strength of e), the maximum k such that e is contained in a k-edge-connected vertex-induced subgraph of G.
- All cuts are preserved.
- Σ_e p_e ≤ n ⇒ |F| = O(n log n) edges in the sparsifier.
- Running time O(m log³ n).
Spielman–Srivastava '08:
- Set ρ = O(log n) and p_e = effective resistance of e (view G as an electrical network in which each edge is a 1-ohm resistor; similar to edge connectivity).
- H is a spectral sparsifier of G ⇒ all cuts are preserved.
- Σ_e p_e = n − 1 ⇒ |F| = O(n log n) edges in the sparsifier.
- Running time O(m log^50 n); uses the "Matrix Chernoff bound". Improved to O(m log³ n) by [Koutis–Miller–Peng '10].
11. Our Work
Fung–Harvey '10 (independently Hariharan–Panigrahi '10); assume ε is constant.
- Set ρ = O(log² n) and p_e = 1/(edge connectivity of e), the minimum size of a cut that contains e.
- All cuts are preserved.
- Σ_e p_e ≤ n ⇒ |F| = O(n log² n).
- Running time O(m log² n).
Advantages:
- Edge connectivities are natural and easy to compute.
- Faster than previous algorithms.
- Implies that sampling by edge strength, effective resistances, or random spanning trees works. Why spanning trees? Pr[e ∈ T] = effective resistance of e, and the edges of a random spanning tree are negatively correlated.
Disadvantages: an extra log factor, and no spectral sparsification.
12. Our Work (continued)
Extra trick: we can shrink |F| from O(n log² n) to O(n log n) by using Benczúr–Karger to sparsify our sparsifier! Total running time O(m log² n) + Õ(n).
13. Our Work (continued)
Panigrahi '10: a sparsifier with O(n log n / ε²) edges, with running time O(m) in unweighted graphs and O(m) + Õ(n/ε²) in weighted graphs.
14. Notation: k_uv = minimum size of a cut separating u and v.
Main ideas:
- Partition the edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }.
15. Notation: k_uv = minimum size of a cut separating u and v.
Main ideas:
- Partition the edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }.
- Prove that the weight of the sampled edges that each cut takes from each connectivity class is about right.
- This yields a sparsifier.
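The partition into connectivity classes is a one-liner once the edge connectivities k_e are known. This sketch assumes a precomputed map `k` from edges to their connectivities (computing k_e is not shown); the function name is ours.

```python
from collections import defaultdict

def connectivity_classes(edges, k):
    """Group edges into classes E_i = { e : 2^(i-1) <= k_e < 2^i },
    given k[e] = edge connectivity of e (a positive integer)."""
    classes = defaultdict(list)
    for e in edges:
        i = k[e].bit_length()  # the unique i with 2^(i-1) <= k_e < 2^i
        classes[i].append(e)
    return classes
```

Since k_e ≤ n − 1, at most log n classes are nonempty, which is where the log n factor in the union over classes comes from.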
16. Prove that the weight of the sampled edges that each cut takes from each connectivity class is about right.
Notation:
- C = δ(U) is a cut.
- C_i = δ(U) ∩ E_i is a cut-induced set.
Need to prove: the sampled weight of each cut-induced set C_i is concentrated around |C_i|. (Figure: a cut C partitioned into cut-induced sets C_1, C_2, C_3, C_4.)
17. Notation: C_i = δ(U) ∩ E_i is a cut-induced set.
Goal: prove concentration for every cut-induced set C_i.
Key ingredients:
- Chernoff bound: prove that the failure probability for each C_i is small.
- Bound on the number of small cuts: prove that #{ cut-induced sets C_i induced by a small cut C } is small.
- Union bound: the sum of the failure probabilities is small, so probably there are no failures.
18. Counting Small Cut-Induced Sets
Theorem: Let G = (V, E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (where k_uv = minimum size of a cut separating u and v). Then, for every α ≥ 1,
    |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).
Corollary (Counting Small Cuts [K'93]): Let G = (V, E) be a graph and let K be the edge connectivity of G (i.e., the global min cut value). Then, for every α ≥ 1,
    |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α).
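The corollary can be sanity-checked by brute force on a tiny graph: enumerate all cuts, find the global min cut K, and count the distinct cuts of size at most αK. This is purely illustrative code (the function name is ours) and is exponential in n, so it is only usable for toy instances.

```python
from itertools import combinations

def count_small_cuts(n, edges, alpha):
    """Count distinct cuts delta(U) with |delta(U)| <= alpha * K,
    where K is the global min cut, by enumerating all proper subsets U."""
    cuts = {}
    for r in range(1, n):
        for U in combinations(range(n), r):
            Uset = set(U)
            cut = frozenset(e for e in edges if (e[0] in Uset) != (e[1] in Uset))
            cuts[cut] = len(cut)  # frozenset key dedups U vs. its complement
    K = min(cuts.values())
    return sum(1 for size in cuts.values() if size <= alpha * K), K
```

On the 4-cycle, for example, K = 2 and there are 6 distinct minimum cuts, comfortably below the bound n^(2α) = 16 for α = 1.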
19. Comparison
Theorem: Let G = (V, E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (where k_uv = minimum size of a cut separating u and v). Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^(2c/K) for all c ≥ 1.
Corollary [K'93]: Let G = (V, E) be a graph and let K be the edge connectivity of G (i.e., the global min cut value). Then |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K) for all c ≥ 1.
How many cuts of size 1? The theorem says < n², taking K = c = 1. The corollary says < 1, because K = 0. (A slightly unfair comparison.)
20. Conclusions
- Graph sparsifiers are important for fast algorithms and for some combinatorial theorems.
- Sampling by edge connectivities gives a sparsifier with O(n log² n) edges in O(m log² n) time. Improvement: O(n log n) edges in O(m) + Õ(n) time [Panigrahi '10].
- Sampling by effective resistances also works ⇒ sampling O(log² n) random spanning trees gives a sparsifier.
Questions:
- Improve log² n to log n?
- Does sampling o(log n) random trees give a sparsifier with o(log n) approximation?