Presentation Transcript

Slide1

SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

Matt Welsh, David Culler, and Eric Brewer

Computer Science Division

University of California, Berkeley

Symposium on Operating Systems Principles (SOSP), October 2001

http://www.eecs.harvard.edu/~mdw/proj/seda


[All graphs and figures are from this URL]

Slide2

Why Discuss a Web Server in an OS Class?

The paper discusses the design of well-conditioned Web servers

Thread-based and event-driven concurrency are central to OS design

Task scheduling and resource management issues are also very important

Slide3

Well-conditioned service (the goal)

Should behave in pipeline fashion:

If underutilized:

Latency (s) = N × max stage delay

Throughput (requests/s) proportional to load

At saturation and beyond:

Latency proportional to queue delay

Throughput = 1 / max stage delay

Graceful degradation

Not how the Web typically performs during the “Slashdot effect”

Slide4
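These pipeline formulas can be checked with a toy calculation (a hypothetical 4-stage service; the numbers are illustrative, not from the paper):

```python
# Idealized behavior of a well-conditioned N-stage pipeline
# (hypothetical numbers, illustrating the formulas on this slide).

def latency_underutilized(n_stages, max_stage_delay):
    """Latency (s) = N x max stage delay, when underutilized."""
    return n_stages * max_stage_delay

def throughput_saturated(max_stage_delay):
    """Throughput (requests/s) = 1 / max stage delay, at saturation."""
    return 1.0 / max_stage_delay

# A hypothetical 4-stage service whose slowest stage takes 10 ms:
print(latency_underutilized(4, 0.010))  # end-to-end latency in seconds
print(throughput_saturated(0.010))      # requests/s ceiling at saturation
```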

Thread-based concurrency

One thread per request

Offers simple, supported programming model

I/O concurrency handled by OS scheduling

Thread state captures FSM state

Synchronization required for shared resource access

Cache/TLB misses, thread scheduling, lock contention overheads

Slide5
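The thread-per-request model above can be sketched in a few lines (a hypothetical toy handler, not the paper's code; real servers also parse requests and manage keep-alive):

```python
import socket
import threading

def handle(conn):
    """One thread per request: blocking I/O, state lives on the stack."""
    with conn:
        conn.recv(1024)  # blocking read; the OS deschedules this thread
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")

def serve(server_sock):
    """Accept loop: spawn a fresh thread for every connection."""
    while True:
        conn, _ = server_sock.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Under light load this is simple and fast; the next slide's graphs show what happens as the thread count grows.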

Threaded server throughput versus load

Latency is unbounded as the number of threads increases

Throughput decreases

Thrashing – more cycles spent on overhead than real work

Hard to decipher performance bottlenecks

Slide6

Bounded thread pool

Limit the number of threads to prevent thrashing

Queue incoming requests or reject outright

Difficult to provide optimal performance across differentiated services

Inflexible design during peak usage

Still difficult to profile and tune

Slide7
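A bounded pool with queue-or-reject admission might look like this sketch (class name and policy details are ours, for illustration):

```python
import queue
import threading

class BoundedPool:
    """Fixed worker count plus a finite request queue; excess is rejected."""
    def __init__(self, workers, depth):
        self.tasks = queue.Queue(maxsize=depth)
        for _ in range(workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            task = self.tasks.get()   # run queued requests one at a time
            task()
            self.tasks.task_done()

    def submit(self, task):
        try:
            self.tasks.put_nowait(task)  # queue the incoming request...
            return True
        except queue.Full:
            return False                 # ...or reject it outright
```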

Event-driven concurrency

Each FSM structured as network of event handlers and represents a single flow of execution in the system

Single thread per FSM, typically one FSM per CPU; the number of FSMs is small

App must schedule event execution and balance fairness against response time

App must maintain FSM state across I/O access

I/O must be non-blocking

Modularity difficult to achieve and maintain

A poorly designed stage can kill app performance

Slide8

Event-driven server throughput versus load

Avoids performance degradation of thread-driven approach

Throughput is constant

Latency is linear

Slide9

Structured event queue overview

Partition the application into discrete stages

Then add event queue before each stage

Modularizes design

One stage may enqueue events onto another stage's input queue

Each stage may have a local thread pool

Slide10

A SEDA stage

Stage consists of:

Event queue (likely finite size)

Thread pool (small)

Event handler (application specific)

Controller (local dequeuing and thread allocation)

Slide11
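A stage as described above can be sketched roughly as follows (class and method names are ours, not SEDA's actual API; the controller is omitted):

```python
import queue
import threading

class Stage:
    """A finite event queue + small thread pool + app-specific handler."""
    def __init__(self, handler, threads=2, depth=64):
        self.events = queue.Queue(maxsize=depth)   # finite event queue
        self.handler = handler                     # application-specific
        for _ in range(threads):                   # small thread pool
            threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            event = self.events.get()   # a controller would batch/tune here
            self.handler(event)
            self.events.task_done()

    def enqueue(self, event):
        self.events.put(event)          # blocks when full (backpressure)
```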

A SEDA application

A SEDA application is composed of a network of SEDA stages

An event handler may enqueue an event in another stage's queue

Each stage controller may

Exert backpressure (block on full queue)

Shed events (drop on full queue)

Degrade service (in application specific manner)

Or take some other action

Queues decouple stages, providing:

Modularity

Stage-level load management

Profile analysis/monitoring

With increased latency

Slide12
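The overload policies can be illustrated with a toy downstream queue (dropping shown here; a blocking put on the same queue would give backpressure instead):

```python
import queue

downstream = queue.Queue(maxsize=2)   # the next stage's finite input queue

def forward(event):
    """Upstream handler: enqueue onto the next stage, shedding on overload."""
    try:
        downstream.put_nowait(event)  # normal path
        return True
    except queue.Full:                # event shedding: drop, tell the caller
        return False
```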

SEDA resource controllers

Controllers dynamically tune resource usage to meet performance targets

May use both local stage state and global state

Paper introduces implementations of two controllers (others are possible)

Thread pool – create/delete threads as load requires

Batching – vary the number of events processed per stage invocation

Slide13
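The thread-pool controller's policy amounts to a periodic decision like the following sketch (thresholds are made-up values, not the paper's):

```python
def adjust_pool(queue_len, n_threads, high=32, low=2, max_threads=8):
    """One controller tick: grow the pool on backlog, shrink it when idle."""
    if queue_len > high and n_threads < max_threads:
        return n_threads + 1    # queue backing up: add a thread
    if queue_len < low and n_threads > 1:
        return n_threads - 1    # little work queued: retire a thread
    return n_threads
```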

Asynchronous I/O

SEDA provides I/O stages:

Asynchronous socket I/O

Uses non-blocking I/O provided by the OS

Asynchronous file I/O

Uses blocking I/O with a thread pool

Slide14

Asynchronous socket I/O performance

SEDA non-blocking I/O vs. blocking I/O and bounded thread pool

SEDA implementation provides fairly constant I/O bandwidth

Thread pool implementation exhibits typical thread thrashing

Slide15

Performance comparison

SEDA's Haboob vs. Apache and Flash Web servers

Haboob is a complex, 10-stage design in Java

Apache uses bounded process pools in C

One process per connection, 150 max

Flash uses event-driven design in C

Note: the authors claim creation of “Haboob” was greatly simplified due to the modularity of the SEDA architecture

Slide16

I got a Haboob.

ha·boob /həˈbub/ [huh-boob]

–noun: a thick dust storm or sandstorm that blows in the deserts of North Africa and Arabia or on the plains of India. (From www.dictionary.com)

Slide17

Performance comparison (cont.)

Apache fairness declines quickly past 64 clients

Throughput is constant at high loads for all servers; Haboob's is best

Apache and Flash exhibit huge variation in response times (long tails)

Haboob provides low variation in response times at the cost of longer average response times

Slide18

Performance comparison (cont.)

Apache and Haboob without the controller process all requests; buggy Flash drops ¾

Haboob's response time with the controller is better behaved

Controller drops requests with error notification under heavy load

Here 98% of requests are shed by the controller at the bottleneck

Still not able to offer a guarantee of service better than the target (22 vs. 5)

Slide19

Conclusion

SEDA provides a viable and modularized model for Web service design

SEDA represents a middle ground between thread- and event-based Web services

SEDA offers robust performance under heavy load, optimizing fairness over quick response

SEDA allows novel dynamic control mechanisms to be elegantly incorporated