Presentation Transcript

Slide1

Learning Combinatorial Optimization Algorithms over Graphs

Hanjun Dai*, joint work with Elias Khalil*, Yuyu Zhang, Bistra Dilkina, Le Song (Georgia Tech). To appear in NIPS 2017.

* equal contribution

Slide2

A motivational example: Minimum Vertex Cover

Find the smallest vertex subset S such that each edge has at least one endpoint in S.

Models advertising optimization in social networks.
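To make the definition concrete, here is a tiny brute-force sketch (exponential time, illustration only; the helper name is ours, not from the paper):

```python
from itertools import combinations

def min_vertex_cover(nodes, edges):
    """Brute-force smallest vertex cover: try subsets in increasing size."""
    for k in range(len(nodes) + 1):
        for subset in combinations(nodes, k):
            s = set(subset)
            # A cover must touch every edge with at least one endpoint.
            if all(u in s or v in s for u, v in edges):
                return s
    return set(nodes)

# Tiny example: a path 0-1-2-3; {1, 2} covers every edge.
print(min_vertex_cover([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))
```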

Slide3

Motivation: a realistic setting

The same problem is solved repeatedly on slightly different data. Example: a delivery truck in downtown Atlanta does daily routing in the same area with slightly different customers each day, i.e., delivery on the same road network under slightly different conditions.

Tackling NP-Hard problems

Approach                 | Design rationale
Exact algorithms         | Tight formulations, good IP solvers
Approximation algorithms | Worst-case guarantees
Heuristics               | Empirical performance

Can classical algorithms exploit the common distribution of instances? Not automatically!

Slide4

Proposal: Learning Greedy Algorithms

Minimum Vertex Cover 2-approximation: greedily add both vertices of the edge with the maximum degree sum.

Goal: learn a better criterion for greedy solution construction over a graph distribution.
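A minimal Python sketch of this baseline, assuming the graph is given as an edge list over hashable vertices (the function name is ours):

```python
def two_approx_mvc(edges):
    """Greedy 2-approximation sketch: repeatedly pick the uncovered edge
    whose endpoint degree sum (over uncovered edges) is largest, and add
    both endpoints to the cover."""
    cover, remaining = set(), set(edges)
    while remaining:
        deg = {}
        for u, v in remaining:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        u, v = max(remaining, key=lambda e: deg[e[0]] + deg[e[1]])
        cover |= {u, v}
        # Remove every edge already covered by the two new vertices.
        remaining = {e for e in remaining if u not in e and v not in e}
    return cover
```

It is a 2-approximation because every picked edge must have at least one endpoint in any optimal cover, and we add at most two vertices per picked edge.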

Slide5

Problem Statement

Problem                  | Greedy construction
Minimum Vertex Cover     | Insert nodes into cover
Maximum Cut              | Insert nodes into subset
Traveling Salesman Prob. | Insert nodes into sub-tour

Given a graph optimization problem and a distribution D of problem instances, can we learn better greedy heuristics that generalize to unseen instances from D?

Slide6

Challenge #1: How to Learn

Possible approach: supervised learning. Given a partial solution, predict the next vertex to add to the solution.
Data: collect (partial solution, next vertex) pairs as (features, label).
Task: multi-class classification.
Pointer Networks [Vinyals, et al., NIPS 2015]: a smarter approach based on recurrent neural networks.

PROBLEM
We need to compute good or optimal solutions to NP-hard problems in order to learn!

Slide7

Reinforcement Learning Background

Running example: playing an Atari game [Mnih, et al., Nature 2015].

State: the current screen.
Action: move your paddle left or right.
Reward: the score you earned at the current step.
Action-value function Q(s, a): your predicted future total reward.
Policy: how you choose your action.
Greedy policy: pick the action with the largest predicted value, \(\pi(s) = \arg\max_a Q(s, a)\).
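For completeness, the textbook one-step Q-learning update that makes such a Q-function learnable from experience (standard form, not necessarily this paper's exact variant):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```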

Slide8

Reinforcement Learning Formulation: Minimum Vertex Cover

State: the currently selected nodes.
Reward: -1 for each vertex added to the cover, so maximizing total reward minimizes cover size.
Action-value function Q(S, v): predicted future total reward of picking vertex v in state S.
Greedy policy: repeat until all edges are covered:
1. Compute the score Q(S, v) for each vertex.
2. Select the vertex with the largest score.
3. Add the best vertex to the cover and update the state.

SOLUTION
Improve the policy by learning from experience => no need to compute optimal solutions.
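A minimal sketch of this greedy construction loop, assuming a learned scoring function `q_fn` (illustrative name) and a graph given as an adjacency dict with integer-labeled vertices:

```python
def greedy_cover(graph, q_fn):
    """Build a vertex cover greedily with a learned scoring function.
    graph: dict mapping each vertex to its set of neighbors.
    q_fn(state, v): estimated value of adding v given the current state."""
    state = set()
    uncovered = {(u, v) for u in graph for v in graph[u] if u < v}
    while uncovered:
        candidates = [v for v in graph if v not in state]
        best = max(candidates, key=lambda v: q_fn(state, v))
        state.add(best)                                      # take the action
        uncovered = {e for e in uncovered if best not in e}  # update the state
    return state
```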

Slide9

Reinforcement Learning Algorithm

Repeat:
1. Sample a graph instance.
2. Explore or exploit according to the current policy.
3. Update the state.
4. Optimize the model parameters Theta.

Theta: the model parameters; the Q-values depend on the node features through Theta.
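Putting the loop together, a hedged sketch of training with epsilon-greedy exploration and a one-step temporal-difference target. `sample_graph`, `q_fn`, and `update_params` are placeholder names for components the slides describe abstractly; the paper itself uses fitted Q-iteration with experience replay and n-step returns, which this simplifies to one-step updates:

```python
import random

def train(sample_graph, q_fn, update_params, episodes=1000, eps=0.1, gamma=1.0):
    """One-step Q-learning sketch for the vertex cover formulation above."""
    for _ in range(episodes):
        graph = sample_graph()                         # instance from distribution D
        state = set()
        uncovered = {(u, v) for u in graph for v in graph[u] if u < v}
        while uncovered:
            candidates = [v for v in graph if v not in state]
            if random.random() < eps:                  # explore
                v = random.choice(candidates)
            else:                                      # exploit the current policy
                v = max(candidates, key=lambda u: q_fn(state, u))
            reward = -1.0                              # each added vertex costs 1
            next_state = state | {v}
            next_uncovered = {e for e in uncovered if v not in e}
            rest = [u for u in graph if u not in next_state]
            if next_uncovered and rest:                # non-terminal: bootstrap
                target = reward + gamma * max(q_fn(next_state, u) for u in rest)
            else:                                      # terminal state
                target = reward
            update_params(state, v, target)            # regress Q(state, v) toward target
            state, uncovered = next_state, next_uncovered
```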

Slide10

Challenge #2: How to Represent

Action-value function Q(S, v): an estimate of the goodness of vertex v in state S.
Representation of (S, v): a feature vector that describes v in state S.
Possible approach: feature engineering, e.g., degree, 2-hop neighborhood size, or other centrality measures (a sketch of this baseline follows the list below).

PROBLEMS
1. Task-specific engineering is needed.
2. It is hard to tell what a good feature is.
3. It is difficult to generalize across different graph sizes.
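As a concrete illustration of that feature-engineering baseline, a small networkx sketch; the exact feature set here is our choice, not the paper's:

```python
import networkx as nx

def handcrafted_features(g, v, state):
    """Illustrative hand-engineered feature vector for vertex v in state S."""
    # All nodes within 2 hops of v, excluding v itself.
    two_hop = set(nx.single_source_shortest_path_length(g, v, cutoff=2)) - {v}
    return [
        g.degree(v),                      # degree
        len(two_hop),                     # 2-hop neighborhood size
        nx.degree_centrality(g)[v],       # a simple centrality measure
        int(v in state),                  # already selected?
    ]
```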

Slide11

Deep Node Representations [Dai, et al., ICML 2016]

Each node v keeps an embedding vector \(\mu_v\), updated from the node's own tag, its neighbors' features, and its neighbors' edge weights, passed through a non-linearity. Repeat the embedding update T times:

\[
\mu_v \leftarrow \mathrm{relu}\Big( \theta_1 x_v \;+\; \theta_2 \sum_{u \in N(v)} \mu_u \;+\; \theta_3 \sum_{u \in N(v)} \mathrm{relu}\big(\theta_4\, w(v, u)\big) \Big)
\]

where \(x_v\) is the node's own tag, \(w(v, u)\) are the neighbors' edge weights, \(\mathrm{relu}\) is the non-linearity, and \(\Theta = \{\theta_1, \dots, \theta_4\}\) are model parameters.
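A minimal numpy sketch of this embedding loop, with assumed shapes (theta1 and theta4 as p-vectors, theta2 and theta3 as p-by-p matrices); it mirrors the update above but is not the authors' implementation:

```python
import numpy as np

def embed(adj, weights, tags, thetas, T=4, p=64):
    """structure2vec-style embedding sketch.
    adj: list of neighbor lists; weights[v][u]: edge weight; tags[v]: node tag x_v."""
    th1, th2, th3, th4 = thetas
    relu = lambda z: np.maximum(z, 0.0)
    mu = np.zeros((len(adj), p))                       # all-zero initialization
    for _ in range(T):                                 # T rounds of message passing
        new_mu = np.zeros_like(mu)
        for v in range(len(adj)):
            nbr_sum = sum((mu[u] for u in adj[v]), np.zeros(p))   # neighbors' features
            edge_sum = sum((relu(th4 * weights[v][u]) for u in adj[v]), np.zeros(p))
            new_mu[v] = relu(th1 * tags[v] + th2 @ nbr_sum + th3 @ edge_sum)
        mu = new_mu                                    # synchronous update
    return mu
```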

Slide12

Deep Node Representations (continued)

After repeating the embedding update T times (same update and parameters as the previous slide), compute a Q-value for each node by combining a sum-pooling over all node embeddings with the node's own embedding:

\[
Q(S, v) = \theta_5^\top\, \mathrm{relu}\Big( \big[\, \theta_6 \sum_{u} \mu_u \; ; \; \theta_7\, \mu_v \,\big] \Big)
\]

SOLUTION
1. No feature engineering needed.
2. The features' parameters are trained to be good.
3. Can handle different graph sizes.
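A matching sketch of the readout, again with assumed shapes (theta5 a 2p-vector, theta6 and theta7 p-by-p matrices):

```python
import numpy as np

def q_values(mu, th5, th6, th7):
    """Readout sketch: Q(S, v) = th5 . relu([th6 @ sum_u mu_u ; th7 @ mu_v]).
    mu: (n, p) matrix of node embeddings; returns one Q-value per node."""
    relu = lambda z: np.maximum(z, 0.0)
    pooled = th6 @ mu.sum(axis=0)          # sum-pooling: a state-level summary
    return np.array([th5 @ relu(np.concatenate([pooled, th7 @ mu_v]))
                     for mu_v in mu])
```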

Slide13

Overall Framework (figure)

Slide14

Experimental Setup

            | Minimum Vertex Cover (MVC)               | Maximum Cut (MAXCUT) | Traveling Salesman Problem (TSP)
Graph types | Erdos-Renyi (ER) or Barabasi-Albert (BA) | ER or BA             | DIMACS generator; uniform grid or clustered
Solvers     | ILP with CPLEX                           | IQP with CPLEX       | Concorde

Feature embedding size: 64. Embedding iterations: 3 to 5. Full details in the paper.
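For instance, training graphs of the ER or BA type can be sampled with networkx; the edge probability and attachment parameter below are illustrative choices, not the paper's settings:

```python
import networkx as nx
import random

def sample_instance(graph_type="BA", n_min=50, n_max=100):
    """Sample one training graph from the families described above."""
    n = random.randint(n_min, n_max)
    if graph_type == "ER":
        return nx.erdos_renyi_graph(n, p=0.15)   # Erdos-Renyi
    return nx.barabasi_albert_graph(n, m=4)      # Barabasi-Albert
```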

Slide15

Results: Solution Quality [MVC - BA]

(Figure: approximation ratio by method. Our method is near-optimal; its gap is barely visible.)

Slide16

Results: Solution Quality [MAXCUT - BA] (figure)

Slide17

Results: Solution Quality [TSP - clustered] (figure)

Slide18

Results: Realistic Instances

MemeTracker: a graph of news propagation between media sites (http://snap.stanford.edu/netinf/#data).
Physics: Ising spin glass model (http://www.optsicom.es/maxcut/#instances).

Slide19

Results: Algorithm Behavior (figure)

Slide20

Results: Algorithm Behavior, continued (figure)

Slide21

Learning graph opt: quantitative comparison

Train on small graphs with 50-100 nodes. The learned heuristic generalizes not only to graphs from the same distribution, but also to larger graphs, with an approximation ratio below 1.007.

Slide22

Learning graph opt: time-solution tradeoff

Setup: generate 200 Barabasi-Albert networks with 300 nodes each, and let CPLEX produce its 1st, 2nd, 3rd, and 4th feasible solutions.

(Figure: time vs. solution quality. Legend: Embedded MF, CPLEX 1st, CPLEX 2nd, CPLEX 3rd, CPLEX 4th, 2-approx, 2-approx +, RNN.)

Embedding produces an algorithm with a good tradeoff!

Slide23

Conclusion

A learning framework that exploits graph structure.
Applies directly to many graph optimization problems.
A promising tool for automated algorithm design.

NIPS preprint: https://arxiv.org/abs/1704.01665
Code: https://github.com/Hanjun-Dai/graph_comb_opt