Scheduling: CS 111 Operating Systems
Peter Reiher
PowerPoint presentation transcript, uploaded 2018-03-20

Presentation Transcript

Slide 1

Scheduling
CS 111 Operating Systems
Peter Reiher

Slide 2

Outline
- What is scheduling?
- What are our scheduling goals?
- What resources should we schedule?
- Example scheduling algorithms and their implications

Slide 3

What Is Scheduling?
- An operating system often has choices about what to do next
- In particular, for a resource that can serve one client at a time, when there are multiple potential clients:
  - Who gets to use the resource next?
  - And for how long?
- Making those decisions is scheduling

Slide 4

OS Scheduling Examples
- What job to run next on an idle core? How long should we let it run?
- In what order to handle a set of block requests for a disk drive?
- If multiple messages are to be sent over the network, in what order should they be sent?

Slide 5

How Do We Decide How To Schedule?
- Generally, we choose goals we wish to achieve and design a scheduling algorithm that is likely to achieve them
- Different scheduling algorithms try to optimize different quantities
- So changing the scheduling algorithm can drastically change system behavior

Slide 6

The Process Queue
- The OS typically keeps a queue of processes that are ready to run, ordered by whichever one should run next
  - Which depends on the scheduling algorithm used
- When the time comes to schedule a new process, grab the first one on the process queue
- Processes that are not ready to run either aren't in that queue, are at the end, or are ignored by the scheduler

Slide 7

Potential Scheduling Goals
- Maximize throughput: get as much work done as possible
- Minimize average waiting time: try to avoid delaying too many processes for too long
- Ensure some degree of fairness, e.g., minimize worst-case waiting time
- Meet explicit priority goals: scheduled items tagged with a relative priority
- Real-time scheduling: scheduled items tagged with a deadline to be met

Slide 8

Different Kinds of Systems, Different Scheduling Goals
- Time sharing: fast response time to interactive programs; each user gets an equal share of the CPU
- Batch: maximize total system throughput; delays of individual processes are unimportant
- Real-time: critical operations must happen on time; non-critical operations may not happen at all

Slide 9

Preemptive vs. Non-Preemptive Scheduling
- When we schedule a piece of work, we could let it use the resource until it finishes
- Or we could use virtualization techniques to interrupt it partway through, allowing other pieces of work to run instead
- If scheduled work always runs to completion, the scheduler is non-preemptive
- If the scheduler temporarily halts running jobs to run something else, it's preemptive

Slide 10

Pros and Cons of Non-Preemptive Scheduling
Pros:
- Low scheduling overhead
- Tends to produce high throughput
- Conceptually very simple
Cons:
- Poor response time for processes
- Bugs can cause the machine to freeze up, e.g., if a process contains an infinite loop
- Not good fairness (by most definitions)
- May make real-time and priority scheduling difficult

Slide 11

Pros and Cons of Preemptive Scheduling
Pros:
- Can give good response time
- Can produce very fair usage
- Works well with real-time and priority scheduling
Cons:
- More complex
- Requires the ability to cleanly halt a process and save its state
- May not get good throughput

Slide 12

Scheduling: Policy and Mechanism
- The scheduler moves jobs into and out of a processor (dispatching), requiring various mechanics to do so
- How dispatching is done should not depend on the policy used to decide who to dispatch
- Desirable to separate the choice of who runs (policy) from the dispatching mechanism
- Also desirable that the OS process queue structure not be policy-dependent

Slide 13

Scheduling the CPU
[Figure: new processes enter the ready queue; the dispatcher hands the next process to the context switcher, which puts it on the CPU; on yield (or preemption) it returns to the ready queue; a resource request goes to the resource manager, and the process reenters the ready queue when the resource is granted]

Slide 14

Scheduling and Performance
- How you schedule important system activities has a major effect on performance
- Performance has different aspects; you may not be able to optimize for all of them
- Scheduling performance has very different characteristics under light vs. heavy load
- Important to understand the performance basics regarding scheduling

Slide 15

General Comments on Performance
- Performance goals should be quantitative and measurable
  - If we want "goodness" we must be able to quantify it
  - You cannot optimize what you do not measure
- Metrics: the way and units in which we measure
  - Choose a characteristic to be measured; it must correlate well with the goodness or badness of service
  - Find a unit to quantify that characteristic; it must be a unit that can actually be measured
  - Define a process for measuring the characteristic
- That's enough for now, but actually measuring performance is complex

Slide 16

How Should We Quantify Scheduler Performance?
- Candidate metric: throughput (processes/second)
  - But different processes need different run times
  - Process completion time is not controlled by the scheduler
- Candidate metric: delay (milliseconds)
  - But specifically what delays should we measure?
  - Some delays are not the scheduler's fault: time to complete a service request, time to wait for a busy resource
- Different parties care about these metrics

Slide 17

An Example: Measuring CPU Scheduling
- Process execution can be divided into phases:
  - Time spent running: the process controls how long it needs to run
  - Time spent waiting for resources or completions: resource managers control how long these take
  - Time spent waiting to be run: this time is controlled by the scheduler
- Proposed metric: time that "ready" processes spend waiting for the CPU

Slide 18

Typical Throughput vs. Load Curve
[Figure: throughput vs. offered load; the ideal curve rises linearly up to the maximum possible capacity and then stays flat; the typical curve falls below the ideal and degrades under heavy load]

Slide 19

Why Don’t We Achieve Ideal Throughput?
- Scheduling is not free
  - It takes time to dispatch a process (overhead)
  - More dispatches mean more overhead (lost time)
  - Less time (per second) is available to run processes
- How to minimize the performance gap:
  - Reduce the overhead per dispatch
  - Minimize the number of dispatches (per second)
- This phenomenon is seen in many areas besides process scheduling

Slide 20

Typical Response Time vs. Load Curve
[Figure: delay (response time) vs. offered load; the ideal curve stays low and flat; the typical curve grows slowly at first, then explodes as load approaches capacity]

Slide 21

Why Does Response Time Explode?
- Real systems have finite limits, such as queue size
- When limits are exceeded, requests are typically dropped
  - Which is an infinite response time, for them
  - There may be automatic retries (e.g., TCP), but they could be dropped, too
- If load arrives a lot faster than it is serviced, lots of stuff gets dropped
- Unless we are careful, overheads during heavy load explode
- Effects like receive livelock can also hurt in this case

Slide 22

Graceful Degradation
- When is a system "overloaded"? When it is no longer able to meet its service goals
- What can we do when overloaded?
  - Continue service, but with degraded performance
  - Maintain performance by rejecting work
  - Resume normal service when load drops to normal
- What should we not do when overloaded?
  - Allow throughput to drop to zero (i.e., stop doing work)
  - Allow response time to grow without limit

Slide 23

Non-Preemptive Scheduling
- Consider it in the context of CPU scheduling: a scheduled process runs until it yields the CPU
- Works well for simple systems, with small numbers of processes and natural producer/consumer relationships
- Good for maximizing throughput
- Depends on each process to voluntarily yield
  - A piggy process can starve others
  - A buggy process can lock up the entire system

Slide 24

When Should a Process Yield?
- When it knows it's not going to make progress, e.g., while waiting for I/O
  - Better to let someone else make progress than to sit in a pointless wait loop
- After it has had its "fair share" of time
  - Which is hard to define, since it may depend on the state of everything else in the system
  - Can't expect application programmers to do sophisticated things to decide

Slide 25

Scheduling Other Resources Non-Preemptively
- Schedulers aren't just for the CPU or cores; they also schedule use of other system resources: disks, networks, and, at a low level, busses
- Is non-preemptive scheduling best for each such resource?
- Which of the algorithms we will discuss make sense for each?

Slide 26

Non-Preemptive Scheduling Algorithms
- First come first served
- Shortest job next
- Real-time schedulers

Slide 27

First Come First Served
- The simplest of all scheduling algorithms
- Run the first process on the ready queue, until it completes or yields
- Then run the next process on the queue, until it completes or yields
- Highly variable delays, depending on process implementations
- All processes will eventually be served

Slide 28

First Come First Served Example

Process  Length  Start  Finish  Wait
0        350     0      350     0
1        125     350    475     350
2        475     475    950     475
3        250     950    1200    950
4        75      1200   1275    1200
Total    1275                   2975

Average waiting time: 595 msec

Note: the average is worse than total/5 because four other processes had to wait for the slow-poke who ran first.

Slide 29
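The numbers in this example can be checked with a short simulation. A minimal sketch of non-preemptive first come first served, using the same five process lengths (350, 125, 475, 250, 75) that appear in the deck's other examples:

```python
# FCFS: each process runs to completion in arrival order; its wait is
# the time it spends in the queue before being dispatched.
def fcfs(lengths):
    t, waits, finishes = 0, [], []
    for length in lengths:
        waits.append(t)      # waits until every earlier process finishes
        t += length          # then runs to completion
        finishes.append(t)
    return waits, finishes

waits, finishes = fcfs([350, 125, 475, 250, 75])
print(finishes[-1])              # 1275 (total run time)
print(sum(waits) // len(waits))  # 595 (average wait, as on the slide)
```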

When Would First Come First Served Work Well?
- FCFS scheduling is very simple
- It may deliver very poor response time
- Thus it makes the most sense:
  - In batch systems, where response time is not important
  - In embedded systems (e.g., a telephone or set-top box), where computations are brief and/or exist in natural producer/consumer relationships

Slide 30

Shortest Job First
- Find the shortest task on the ready queue; run it until it completes or yields
- Find the next shortest task on the ready queue; run it until it completes or yields
- Yields the minimum average queueing delay
- This can be very good for interactive response time
- But it penalizes longer jobs

Slide 31

Shortest Job First Example

Process  Length  Start  Finish  Wait
4        75      0      75      0
1        125     75     200     75
3        250     200    450     200
0        350     450    800     450
2        475     800    1275    800
Total    1275                   1525

Average waiting time: 305 msec

Note: even though the total time remained unchanged, reordering the processes significantly reduced the average wait time.

Slide 32
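The reordering in this example is easy to reproduce: sort the same workload by length, then run it to completion in that order. A sketch, assuming all jobs are in the queue at time 0:

```python
# SJF: run jobs shortest-first, tracking each job's wait before dispatch.
def sjf(lengths):
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    t, waits = 0, {}
    for i in order:
        waits[i] = t         # shorter jobs cut ahead, so they wait less
        t += lengths[i]
    return waits, t

waits, total = sjf([350, 125, 475, 250, 75])
print(total)                       # 1275: total work is unchanged
print(sum(waits.values()) // 5)    # 305: average wait drops from 595
```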

Is Shortest Job First Practical?
- How can we know how long a job is going to run?
  - Do processes predict for themselves?
  - Does the system predict for them?
- How fair is SJF scheduling?
  - The smaller jobs will always be run first; new small jobs cut in line, ahead of older, longer jobs
  - Will the long jobs ever run? Only if short jobs stop arriving, which could be never
  - This is called starvation; it is caused by discriminatory scheduling

Slide 33

What If the Prediction Is Wrong?
- Regardless of who made it, in a non-preemptive system we have little choice: continue running the process until it yields
- If the prediction is wrong, the purpose of shortest job first scheduling is defeated, and response time suffers as a result
- Few computer systems attempt to use shortest job first scheduling
- But grocery stores and banks do use it:
  - 10-items-or-less registers
  - Simple deposit and check-cashing windows

Slide 34

Real-Time Schedulers
- For certain systems, some things must happen at particular times
  - E.g., industrial control systems: if you don't rivet the widget before the conveyer belt moves, you have a worthless widget
- These systems must schedule on the basis of real-time deadlines
- Deadlines can be either hard or soft

Slide 35

Hard Real-Time Schedulers
- The system absolutely must meet its deadlines; by definition, the system fails if a deadline is not met
  - E.g., controlling a nuclear power plant . . .
- How can we ensure no missed deadlines?
  - Typically by very, very careful analysis: make sure no possible schedule causes a deadline to be missed, by working it out ahead of time
  - Then the scheduler rigorously follows the deadlines

Slide 36

Ensuring Hard Deadlines
- Must have a deep understanding of the code used in each job, so you know exactly how long it will take
- Vital to avoid non-deterministic timings
  - Even if the non-deterministic mechanism usually speeds things up, you're screwed if it ever slows them down
- Typically means you do things like turn off interrupts
- And the scheduler is non-preemptive

Slide 37

How Does a Hard Real-Time System Schedule?
- There is usually a very carefully pre-defined schedule
- No actual decisions are made at run time; it's all been worked out ahead of time
- Not necessarily using any particular algorithm: the designers may have just tinkered around to make everything "fit"

Slide 38

Soft Real-Time Schedulers
- Highly desirable to meet your deadlines, but some (or any) of them can occasionally be missed
- The goal of the scheduler is to avoid missing deadlines, with the understanding that you might
- May have different classes of deadlines, some "harder" than others
- Need not require quite as much analysis

Slide 39

Soft Real-Time Schedulers and Non-Preemption
- Not as vital that tasks run to completion to meet their deadlines
- Also not as predictable, since you probably did less careful analysis
- In particular, a new task with an earlier deadline might arrive
- If you don't preempt, you might not be able to meet that deadline

Slide 40

What If You Don’t Meet a Deadline?
- Depends on the particular type of system
  - Might just drop the job whose deadline you missed
  - Might allow the system to fall behind
  - Might drop some other job in the future
- At any rate, the behavior will be well defined in each particular system

Slide 41

What Algorithms Do You Use for Soft Real Time?
- Most common is Earliest Deadline First
  - Each job has a deadline associated with it, based on a common clock
  - Keep the job queue sorted by those deadlines
  - Whenever one job completes, pick the first one off the queue
  - Perhaps prune the queue to remove jobs whose deadlines were missed
- Minimizes total lateness

Slide 42
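A minimal EDF sketch along the lines described above, assuming jobs are simple (deadline, duration) pairs on a shared clock. The job values below are invented for illustration, and the pruning rule drops any job whose deadline has already passed at dispatch time:

```python
import heapq

# EDF: keep the job queue sorted by deadline (here, a min-heap);
# whenever one job completes, pick the first one off the queue;
# prune jobs whose deadlines were already missed.
def edf(jobs, now=0):
    """jobs: iterable of (deadline, duration, name) tuples."""
    heap = list(jobs)
    heapq.heapify(heap)                  # ordered by deadline
    ran, missed = [], []
    while heap:
        deadline, duration, name = heapq.heappop(heap)
        if now > deadline:               # already too late: prune it
            missed.append(name)
            continue
        now += duration
        ran.append(name)
    return ran, missed

ran, missed = edf([(30, 10, "a"), (15, 5, "b"), (60, 20, "c")])
print(ran)       # ['b', 'a', 'c']: earliest deadline runs first
```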

Periodic Scheduling for Soft Real Time
- Many soft real-time systems have jobs coming in at predictable intervals, with predictable deadlines
- The system must be designed so that its total amount of work doesn't exceed capacity
- Even so, you might still miss deadlines, because those quantities represent averages, not instantaneous guarantees

Slide 43

Example of a Soft Real-Time Scheduler
- A video playing device
- Frames arrive, from disk or network or wherever
- Ideally, each frame should be rendered "on time", to achieve the highest user-perceived quality
- If you can't render a frame on time, it might be better to skip it entirely, rather than fall further behind

Slide 44

Preemptive Scheduling
- Again in the context of CPU scheduling
- A thread or process is chosen to run
- It runs until either it yields or the OS decides to interrupt it
- At which point some other process/thread runs
- Typically, the interrupted process/thread is restarted later

Slide 45

Implications of Forcing Preemption
- A process can be forced to yield at any time
  - If a higher priority process becomes ready, perhaps as a result of an I/O completion interrupt
  - If the running process's priority is lowered, perhaps as a result of having run for too long
- The interrupted process might not be in a "clean" state, which could complicate saving and restoring its state
- Enables enforced "fair share" scheduling
- Introduces gratuitous context switches, not required by the dynamics of processes
- Creates potential resource sharing problems

Slide 46

Implementing Preemption
- Need a way to get control away from the process, e.g., when the process makes a system call or on a clock interrupt
- Consult the scheduler before returning to the process
  - Has any ready process had its priority raised?
  - Has any process been awakened?
  - Has the current process had its priority lowered?
- The scheduler finds the highest priority ready process
  - If it is the current process, return as usual
  - If not, yield on behalf of the current process and switch to the higher priority process

Slide 47

Clock Interrupts
- Modern processors contain a clock: a peripheral device with limited powers
- It can generate an interrupt at a fixed time interval, which temporarily halts any running process
- A good way to ensure that a runaway process doesn't keep control forever
- A key technology for preemptive scheduling

Slide 48
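A user-space analogue of the clock interrupt can be sketched with POSIX interval timers. This is only an illustration of the idea (a timer handler regaining control from a busy loop), not how a kernel scheduler is implemented, and it is POSIX-only:

```python
import signal

preempted = False

def on_tick(signum, frame):
    # Plays the role of the clock interrupt handler: it runs even though
    # the "process" below never voluntarily yields.
    global preempted
    preempted = True

signal.signal(signal.SIGALRM, on_tick)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # fire once, 50 ms from now

while not preempted:     # a runaway loop that never gives up control...
    pass                 # ...until the timer "interrupt" halts it

signal.setitimer(signal.ITIMER_REAL, 0)      # cancel any pending timer
print("regained control:", preempted)
```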

Round Robin Scheduling Algorithm
- Goal: fair share scheduling
  - All processes are offered equal shares of the CPU
  - All processes experience similar queue delays
- All processes are assigned a nominal time slice, usually the same sized slice for all
- Each process is scheduled in turn
  - It runs until it blocks or its time slice expires
  - It is then put at the end of the process queue, and the next process is run
- Eventually, each process reaches the front of the queue

Slide 49

Properties of Round Robin Scheduling
- All processes get a relatively quick chance to do some computation, at the cost of not finishing any process as quickly
  - A big win for interactive processes
- Far more context switches, which can be expensive
- Runaway processes do relatively little harm: they only take 1/nth of the overall cycles

Slide 50

Round Robin and I/O Interrupts
- Processes get halted by round robin scheduling if their time slice expires
- If they block for I/O (or anything else) on their own, the scheduler doesn't halt them
- Thus, some percentage of the time, round robin acts no differently than FIFO: whenever a process blocks on I/O of its own accord

Slide 51

Round Robin Example

Assume a 50 msec time slice (or quantum).
Dispatch order: 0, 1, 2, 3, 4, 0, 1, 2, . . .

Process  Length  Dispatched at (msec)                                    Finish  Switches
0        350     0, 250, 475, 650, 800, 950, 1050                        1100    7
1        125     50, 300, 525                                            550     3
2        475     100, 350, 550, 700, 850, 1000, 1100, 1150, 1200, 1250   1275    10
3        250     150, 400, 600, 750, 900                                 950     5
4        75      200, 450                                                475     2

Completion order: 4, 1, 3, 0, 2
Total: 1275 msec, 27 context switches
Average waiting time (until first dispatch): 100 msec
First process completed: 475 msec

Slide 52

Comparing Example to Non-Preemptive Examples
- Context switches: 27 vs. 5 (for both FIFO and SJF)
  - Clearly more expensive
- First job completed: 475 msec vs. 75 (shortest job first) and 350 (FIFO)
  - Clearly takes longer to complete some process
- Average waiting time (for first opportunity to compute): 100 msec vs. 305 (shortest job first) and 595 (FIFO)
  - Clearly more responsive

Slide 53

Choosing a Time Slice
- The performance of a preemptive scheduler depends heavily on how long the time slice is
- Long time slices avoid too many context switches, which waste cycles
  - So better throughput and utilization
- Short time slices provide better response time to processes
- How to balance the two?

Slide 54

Costs of a Context Switch
- Entering the OS: taking the interrupt, saving registers, calling the scheduler
- Cycles to choose who to run: the scheduler/dispatcher does work to choose
- Moving OS context to the new process: switching the stack and non-resident process description
- Switching process address spaces: mapping out the old process, mapping in the new one
- Losing instruction and data caches: greatly slowing down the next hundred instructions

Slide 55

Characterizing Costs of a Time Slice Choice
- What percentage of the CPU does a process get?
  - Depends on workload: more processes in the queue means fewer slices per second
  - CPU share = time_slice * slices/second
    - 2% = 20 ms/sec = 2 ms/slice * 10 slices/sec
    - 2% = 20 ms/sec = 5 ms/slice * 4 slices/sec
- Natural rescheduling interval: when a typical process blocks for resources or I/O
  - Ideally, fair share would be based on this period
  - The time slice ends only if the process runs too long

Slide 56
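The identity on this slide is worth making explicit: the share a process receives is the product of its slice length and how often it gets dispatched, so very different slice choices can yield the same share.

```python
# CPU share = time_slice * slices/second, expressed as a fraction of 1s.
def cpu_share(time_slice_ms, slices_per_sec):
    return time_slice_ms * slices_per_sec / 1000.0

print(cpu_share(2, 10))   # 0.02: 2% via 2 ms slices, 10 times a second
print(cpu_share(5, 4))    # 0.02: the same 2% with fewer, longer slices
```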

Multi-queue Scheduling
- One time slice length may not fit all processes
- Create multiple ready queues
  - Short quantum (foreground): tasks that finish quickly; short but frequent time slices; optimize response time
  - Long quantum (background): tasks that run longer; longer but infrequent time slices; minimize overhead
- Different queues may get different shares of the CPU

Slide 57

How Do I Know What Queue To Put a New Process Into?
- If it's in the wrong queue, its scheduling discipline causes it problems
- Start all processes in the short quantum queue
  - Move a process downwards if too many of its time slices end by expiration
  - Move it back upwards if too few of its time slices end that way
  - Processes dynamically find the right queue
- If you also have real-time tasks, you know what belongs there: start them in the real-time queue and don't move them

Slide 58
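The feedback rule above can be sketched as a small state machine. The queue names and thresholds here are invented for illustration (the deck's own figure uses per-queue #tse and #yield limits):

```python
QUEUES = ["short quantum", "medium quantum", "long quantum"]

def adjust(level, slice_expirations, yields, demote_at=10, promote_at=10):
    # Too many time-slice expirations: the process is CPU-bound; demote.
    if slice_expirations >= demote_at and level < len(QUEUES) - 1:
        return level + 1
    # Mostly yields before its slice ends: interactive; promote.
    if yields >= promote_at and level > 0:
        return level - 1
    return level

level = 0                                     # new processes start short
level = adjust(level, slice_expirations=12, yields=0)
print(QUEUES[level])    # medium quantum: demoted after using full slices
level = adjust(level, slice_expirations=0, yields=15)
print(QUEUES[level])    # short quantum: promoted after yielding early
```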

Multiple Queue Scheduling
[Figure: a share scheduler divides CPU time among four ready queues (shares shown: 20%, 50%, 25%, 5%):
- real time queue: tsmax = ∞, #tse = ∞, #yield = ∞
- short quantum queue: tsmax = 500us, #tse = 10, #yield = ∞
- medium quantum queue: tsmax = 2ms, #tse = 50, #yield = 10
- long quantum queue: tsmax = 5ms, #tse = , #yield = 20]

Slide 59

Priority Scheduling Algorithm
- Sometimes processes aren't all equally important
- We might want to preferentially run the more important processes first
- How would our scheduling algorithm work then?
  - Assign each job a priority number
  - Run according to priority number

Slide 60

Priority and Preemption
- If non-preemptive, priority scheduling is just about ordering processes
  - Much like shortest job first, but ordered by priority instead
- But what if scheduling is preemptive?
  - In that case, when a new process is created, it might preempt the running process, if its priority is higher

Slide 61

Priority Scheduling Example

Process  Length  Priority
0        350     10
1        125     30
2        475     40
3        250     20
4        75      50

Timeline:
- Time 0: process 2, the highest priority ready process, runs
- Time 200: process 3's priority is lower than the running process's, so process 2 keeps the CPU
- Time 300: process 4's priority is higher than the running process's, so process 4 preempts process 2
- Time 375: process 4 completes, so we go back to process 2
- Time 550: process 2 completes

Slide 62
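The timeline in this example can be replayed with a small preemptive-priority simulation. The arrival times of processes 0 and 1 are an assumption (t = 0); the slide only shows processes 3 and 4 appearing at t = 200 and t = 300.

```python
# Preemptive priority: the highest priority ready process always runs;
# a new arrival with higher priority preempts the current one.
def priority_preemptive(procs):
    """procs: {pid: (arrival, length, priority)}; returns finish times."""
    remaining = {p: spec[1] for p, spec in procs.items()}
    t, finish = 0, {}
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:                          # idle until next arrival
            t = min(procs[p][0] for p in remaining)
            continue
        p = max(ready, key=lambda q: procs[q][2])   # highest priority runs
        # Run until p finishes or the next arrival, which might preempt it.
        future = [procs[q][0] for q in remaining if procs[q][0] > t]
        horizon = min(future) if future else float("inf")
        ran = min(remaining[p], horizon - t)
        t += ran
        remaining[p] -= ran
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

procs = {0: (0, 350, 10), 1: (0, 125, 30), 2: (0, 475, 40),
         3: (200, 250, 20), 4: (300, 75, 50)}
finish = priority_preemptive(procs)
print(finish[4], finish[2])   # 375 550: process 4 preempts, then 2 resumes
```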

Problems With Priority Scheduling
- Possible starvation: can a low priority process ever run?
  - If not, is that really the effect we wanted?
- May make more sense to adjust priorities
  - Processes that have run for a long time have their priority temporarily lowered
  - Processes that have not been able to run have their priority temporarily raised

Slide 63

Hard Priorities vs. Soft Priorities
- What does a priority mean?
- That the higher priority has absolute precedence over the lower?
  - Hard priorities: that's what the example showed
- That the higher priority should get a larger share of the resource than the lower?
  - Soft priorities

Slide 64

Priority Scheduling in Linux
- Each process in Linux has a priority, called a nice value
  - A soft priority describing the share of the CPU that a process should get
- Commands can be run to change process priorities
  - Anyone can request a lower priority for his own processes
  - Only a privileged user can request a higher one

Slide 65
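For a concrete taste of this interface, Python's standard library exposes the same system call: os.nice(increment) adds to the calling process's nice value and returns the new one. Only lowering priority (a positive increment) works without privileges; the exact values printed depend on where this runs.

```python
import os

current = os.nice(0)     # an increment of 0 just reports the nice value
lowered = os.nice(1)     # anyone may make their own process "nicer"
print(current, lowered)  # lowered is one higher, i.e., lower priority
# os.nice(-1) would request a higher priority and, for an unprivileged
# user, raises PermissionError.
```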

Priority Scheduling in Windows
- 32 different priority levels: half for regular tasks, half for soft real time
  - Real-time scheduling requires special privileges
- Uses a multi-queue approach
- Users can choose from 5 of these priority levels
- The kernel adjusts priorities based on process behavior, with the goal of improving responsiveness