Particle swarm optimisation (PSO)

Perry Brown, Alexander Mathews

Presentation Transcript


Particle swarm optimisation (PSO)

Perry Brown, Alexander Mathews

Image: http://www.cs264.org/2009/projects/web/Ding_Yiyang/ding-robb/pso.jpg

Introduction

The concept was first introduced by Kennedy and Eberhart (1995).
The original idea was to develop a realistic visual simulation of bird flock behaviour.
The simulation was then modified to include a point in the environment that attracted the virtual bird agents.
The potential for optimisation applications then became apparent.

The natural metaphor

A flock of birds (or school of fish, etc.) searching for food

Their objective is to efficiently find the best source of food.
The nature-based theory underlying PSO: the advantage of sharing information within a group outweighs the disadvantage of having to share the reward.

Image: http://www.nerjarob.com/nature/wp-content/uploads/Flock-of-pigeons.jpg

Terminology

The “particles” in PSO have no mass or volume (essentially they are just points in space), but they do have acceleration and velocity.
The behaviour of groups in the developed algorithm ended up looking more like a swarm than a flock.
Hence the name Particle Swarm Optimisation.

Swarm intelligence

Millonas’ five basic principles of swarm intelligence:

Proximity: agents can perform basic space and time calculations.
Quality: agents can respond to environmental conditions.
Diverse response: the population can exhibit a wide range of behaviour.
Stability: the behaviour of the population does not necessarily change every time the environment does.
Adaptability: the behaviour of the population must be able to change when necessary to adapt to the environment.

A PSO swarm satisfies all five of these conditions.

Population and environment

Multidimensional search space

Each point in the search space has some value associated with it; the goal is to find the “best” value.
Numerous agent particles navigate the search space.
Each agent has the following properties:
a current position within the search space
a velocity vector
Additionally, each agent knows the following information:
the best value it has found so far (pbest) and its location
the best value any member of the population has found so far (gbest) and its location

Kennedy and Eberhart’s (1995) refined algorithm

Some number of agent particles are initialised with individual positions and velocities (often just done randomly).
The following steps are then performed iteratively:
The position of each agent is updated according to its current velocity:
new position = old position + velocity
The value at each agent’s new position is checked, with pbest and gbest information updated if necessary.
Each component of each agent’s velocity vector is then adjusted as a function of the differences between its current location and both the pbest and gbest locations, each weighted by a random variable:
new velocity = old velocity + 2 * rand1 * (pbest location - current location) + 2 * rand2 * (gbest location - current location)
where rand1 and rand2 are random numbers between 0 and 1.
(Multiplying by the constant 2 causes particles to “overshoot” their target about half of the time, resulting in further exploration.)
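The iterative steps above can be sketched in plain Python. This is a minimal sketch of the 1995 update rule for a maximisation problem; the function name, parameter defaults, and search-space bounds are illustrative assumptions, not from the slides:

```python
import random

def pso(objective, dim=2, n_particles=20, iterations=100, lo=0.0, hi=9.0):
    """Maximise `objective` with the basic Kennedy-Eberhart update rule."""
    # Initialise positions and velocities randomly
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    pbest_pos = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest_pos[g][:], pbest_val[g]

    for _ in range(iterations):
        for i in range(n_particles):
            # 1. Update position: new position = old position + velocity
            pos[i] = [x + v for x, v in zip(pos[i], vel[i])]
            # 2. Check the value at the new position; update pbest/gbest
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest_val[i], pbest_pos[i] = val, pos[i][:]
                if val > gbest_val:
                    gbest_val, gbest_pos = val, pos[i][:]
            # 3. Adjust each velocity component toward pbest and gbest,
            #    each term weighted by 2 and a fresh random number in [0, 1)
            vel[i] = [v
                      + 2 * random.random() * (pb - x)
                      + 2 * random.random() * (gb - x)
                      for v, x, pb, gb in zip(vel[i], pos[i], pbest_pos[i], gbest_pos)]
    return gbest_pos, gbest_val
```

For example, maximising `lambda p: -(p[0] - 5) ** 2 - (p[1] - 5) ** 2` tends to drive the swarm toward (5, 5). Because gbest only ever improves, the returned value is at least as good as the best initial particle.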

A (partial) example in two dimensions

[Figure: a 10 × 10 grid of cell values; dots indicate the agents (blue, green, and red), and a yellow star indicates the global optimum.]

pbests:
Blue: 0
Green: 0
Red: 0
gbest: 0

Begin with random velocities

[Figure: the same 10 × 10 grid as before.]

pbests:
Blue: 0
Green: 0
Red: 0
gbest: 0

Update particle positions

[Figure: the same 10 × 10 grid, with updated agent positions.]

pbests:
Blue: 1 at (6, 2)
Green: 0
Red: 2 at (8, 7)
gbest: 2 at (8, 7)

Update particle velocities

[Figure: the same 10 × 10 grid, with updated velocity vectors.]

pbests:
Blue: 1 at (6, 2)
Green: 0
Red: 2 at (8, 7)
gbest: 2 at (8, 7)

For example, Blue’s velocity in the horizontal dimension is calculated by:

velocity = 1 + 2 * rand() * (6 – 6) + 2 * rand() * (8 – 6)
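Blue’s horizontal-velocity update can be checked numerically. A small sketch, assuming Blue’s old horizontal velocity is 1, its current and pbest horizontal positions are 6, and the gbest horizontal position is 8, as in the example:

```python
import random

old_velocity = 1                 # Blue's current horizontal velocity
x, pbest_x, gbest_x = 6, 6, 8    # current, pbest, and gbest horizontal positions

new_velocity = (old_velocity
                + 2 * random.random() * (pbest_x - x)    # pbest term vanishes: 6 - 6 = 0
                + 2 * random.random() * (gbest_x - x))   # gbest term pulls right: 8 - 6 = 2

# With the pbest term zero and rand() in [0, 1), the result lies in [1, 5)
print(new_velocity)
```

Only the gbest term contributes here, so Blue is always pulled rightward, by a randomly weighted amount.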

Update particle positions again and repeat…

[Figure: the same 10 × 10 grid, with agent positions updated once more.]

pbests:
Blue: 3 at (8, 6)
Green: 1 at (4, 1)
Red: 2 at (8, 7)
gbest: 3 at (8, 6)

Algorithm termination

The solution to the optimisation problem is (obviously) derived from gbest.
Possible termination conditions that might be used:
The solution exceeds some quality threshold.
The average velocity of the agents falls below some threshold (agents may never become completely stationary).
A certain number of iterations is completed.
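The termination conditions above might be combined like this. A hypothetical sketch; the function name and threshold values are illustrative, not from the slides:

```python
def should_terminate(gbest_value, velocities, iteration,
                     quality_threshold=0.99,
                     velocity_threshold=0.01,
                     max_iterations=1000):
    """Stop when any one of the three termination conditions is met."""
    # 1. Solution exceeds some quality threshold
    if gbest_value >= quality_threshold:
        return True
    # 2. Average speed of the agents falls below some threshold
    #    (agents may never become completely stationary)
    avg_speed = sum(sum(abs(c) for c in v) for v in velocities) / len(velocities)
    if avg_speed < velocity_threshold:
        return True
    # 3. A certain number of iterations is completed
    return iteration >= max_iterations
```

In practice the iteration cap acts as a safety net in case neither of the first two conditions is ever met.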

An example visualisation

http://www.youtube.com/watch?v=_bzRHqmpwvo
Velocities are represented by trailing lines.
After some individual exploration, the particles all converge on the global optimum.
Particles can be seen oscillating about the optimum.

Reference

Kennedy, J.; Eberhart, R., “Particle swarm optimization,” Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Nov/Dec 1995.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=488968&isnumber=10434