Chapter 4: Memory Management

Part 2: Paging Algorithms and Implementation Issues


CS 1550, cs.pitt.edu (originally modified by Ethan L. Miller and Scott A. Brandt)

Page replacement algorithms: summary

Algorithm                    Comment
OPT (Optimal)                Not implementable, but useful as a benchmark
NRU (Not Recently Used)      Crude
FIFO (First-In, First-Out)   Might throw out useful pages
Second chance                Big improvement over FIFO
Clock                        Better implementation of second chance
LRU (Least Recently Used)    Excellent, but hard to implement exactly
NFU (Not Frequently Used)    Poor approximation to LRU
Aging                        Good approximation to LRU, efficient to implement
Working Set                  Somewhat expensive to implement
WSClock                      Implementable version of Working Set
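The clock (second chance) entry in the table is the variant most often implemented in practice. Below is a minimal sketch of it in C over a small, fixed set of frames; the frame count, the reference string, and the helper names are illustrative assumptions, not code from the course.

/*
 * A minimal sketch of the clock (second chance) replacement algorithm,
 * assuming a fixed array of frames with a referenced (R) bit per frame.
 */
#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

struct frame {
    int  page;        /* virtual page number held in this frame, -1 if free */
    bool referenced;  /* R bit: set on each access, cleared as the hand passes */
};

static struct frame frames[NFRAMES];
static int hand = 0;  /* the clock hand: next frame to consider for eviction */

/* Return the frame index holding `page`, or -1 if the page is not resident. */
static int find_frame(int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i].page == page)
            return i;
    return -1;
}

/* Access `page`, evicting with the clock policy on a fault. Returns true on a fault. */
static bool access_page(int page)
{
    int i = find_frame(page);
    if (i >= 0) {                       /* hit: just set the R bit */
        frames[i].referenced = true;
        return false;
    }
    /* Fault: sweep the hand, giving referenced frames a second chance. */
    while (frames[hand].page != -1 && frames[hand].referenced) {
        frames[hand].referenced = false;
        hand = (hand + 1) % NFRAMES;
    }
    frames[hand].page = page;           /* evict (or fill a free frame) */
    frames[hand].referenced = true;
    hand = (hand + 1) % NFRAMES;
    return true;
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++)
        frames[i] = (struct frame){ .page = -1, .referenced = false };

    int refs[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };  /* example reference string */
    int faults = 0;
    for (size_t i = 0; i < sizeof refs / sizeof refs[0]; i++)
        faults += access_page(refs[i]);
    printf("page faults: %d\n", faults);
    return 0;
}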


Working set

Demand paging: bring a page into memory when it’s requested by the process
How many pages are needed?
Could be all of them, but not likely
Instead, processes reference a small set of pages at any given time: locality of reference
Set of pages can be different for different processes, or even at different times in the running of a single process
Set of pages used by a process in a given interval of time is called the working set
If the entire working set is in memory, no page faults!
If there is insufficient space for the working set, thrashing may occur
Goal: keep most of the working set in memory to minimize the number of page faults suffered by a process (a sketch of measuring the working set follows below)
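As a concrete illustration of the definition above, the sketch below computes the working set W(t, tau) from a page reference string: the set of distinct pages touched in the last tau references. The reference string and window size are made-up examples, not data from the slides.

/*
 * A minimal sketch of measuring the working set size from a reference string.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_PAGES 64   /* assume virtual page numbers are < MAX_PAGES */

/* Count the distinct pages in refs[t - tau + 1 .. t] (clamped at 0). */
static int working_set_size(const int *refs, int t, int tau)
{
    bool in_set[MAX_PAGES] = { false };
    int size = 0;
    int start = t - tau + 1 < 0 ? 0 : t - tau + 1;
    for (int i = start; i <= t; i++) {
        if (!in_set[refs[i]]) {
            in_set[refs[i]] = true;
            size++;
        }
    }
    return size;
}

int main(void)
{
    int refs[] = { 1, 2, 1, 3, 2, 1, 4, 4, 4, 5, 6, 5, 6, 5 };
    int n = (int)(sizeof refs / sizeof refs[0]);
    int tau = 5;   /* window measured in references, not wall-clock time */

    for (int t = 0; t < n; t++)
        printf("t=%2d page=%d |W|=%d\n", t, refs[t], working_set_size(refs, t, tau));
    return 0;
}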


Local vs. global allocation policies

What is the pool of pages eligible to be replaced?

Pages belonging to the process needing a new page

All pages in the system

Local allocation: replace a page from this process
May be more “fair”: penalize processes that replace many pages
Can lead to poor performance: some processes need more pages than others
Global allocation: replace a page from any process

[Figure: process A faults on page A4. Resident pages and last access times: A0 14, A1 12, A2 8, A3 5, B0 10, B1 9, B2 3, C0 16, C1 12, C2 8, C3 5, C4 4. Local allocation evicts A’s own least recently used page (A3, last access 5); global allocation evicts the least recently used page in the whole system (B2, last access 3). A small sketch of this choice follows below.]
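The sketch below contrasts local and global victim selection using an LRU-style “last access time” per resident page, mirroring the figure above. The frame table contents and struct layout are illustrative assumptions.

/*
 * Local vs. global victim selection over a table of resident pages.
 */
#include <stdio.h>

struct resident_page {
    char proc;        /* owning process, e.g. 'A', 'B', 'C' */
    int  page;        /* page number within that process */
    int  last_access; /* smaller value = accessed longer ago */
};

/* Pick the LRU frame; if local_only, restrict the search to `faulting_proc`. */
static int pick_victim(const struct resident_page *frames, int n,
                       char faulting_proc, int local_only)
{
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (local_only && frames[i].proc != faulting_proc)
            continue;
        if (victim < 0 || frames[i].last_access < frames[victim].last_access)
            victim = i;
    }
    return victim;
}

int main(void)
{
    struct resident_page frames[] = {
        {'A',0,14},{'A',1,12},{'A',2,8},{'A',3,5},
        {'B',0,10},{'B',1,9},{'B',2,3},
        {'C',0,16},{'C',1,12},{'C',2,8},{'C',3,5},{'C',4,4},
    };
    int n = (int)(sizeof frames / sizeof frames[0]);

    int local  = pick_victim(frames, n, 'A', 1);  /* expect A3 */
    int global = pick_victim(frames, n, 'A', 0);  /* expect B2 */
    printf("local victim:  %c%d\n", frames[local].proc,  frames[local].page);
    printf("global victim: %c%d\n", frames[global].proc, frames[global].page);
    return 0;
}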


Page fault rate vs. allocated frames

Local allocation may be more “fair”
Don’t penalize other processes for one process’s high page fault rate
Global allocation is better for overall system performance
Take page frames from processes that don’t need them as much
Reduce the overall page fault rate (even though the rate for a single process may go up)


Control overall page fault rate

Despite good designs, the system may still thrash
If most (or all) processes have a high page fault rate:
Some processes need more memory, but no processes need less memory
Problem: no way to reduce the page fault rate (can’t give processes more pages without stealing them from other processes)
Solution: reduce the number of processes competing for memory
Swap one or more to disk, divide up the pages they held
Recall the discussion about degree of multiprogramming


Discussion: How big should a page be?

Smaller pages have advantages
Less internal fragmentation
Better fit for various data structures, code sections
Less unused physical memory (some pages have 20 useful bytes and the rest isn’t needed currently)
Larger pages are better because
Less overhead to keep track of them
Smaller page tables
TLB can point to more memory (TLB size is fixed, so the same number of entries maps more memory per page)
Faster paging algorithms (fewer table entries to look through)
More efficient to transfer larger pages to and from disk
(A small worked trade-off calculation follows below.)
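One classic way to quantify this trade-off treats page table space as s*e/p and internal fragmentation as p/2 per process, where s is the average process size, e the bytes per page table entry, and p the page size; total overhead is minimized at p = sqrt(2*s*e). The sketch below evaluates this; the values of s and e are illustrative assumptions, not figures from the slides.

/*
 * Page-size trade-off: overhead(p) = s*e/p + p/2, minimized at sqrt(2*s*e).
 * Link with -lm on most Unix systems for sqrt().
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double s = 1 << 20;   /* assume average process size: 1 MiB */
    double e = 8;         /* assume 8 bytes per page table entry */

    double p_opt = sqrt(2.0 * s * e);
    printf("optimal page size ~ %.0f bytes\n", p_opt);   /* ~4096 for these values */

    for (int p = 1024; p <= 16384; p *= 2)
        printf("p=%5d  overhead=%.0f bytes\n", p, s * e / p + p / 2.0);
    return 0;
}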


Discussion: Sharing pages

Processes can share pages
Entries in page tables point to the same physical page frame
Easier to manage with code, since typically code is not modified during run time (no critical section)
Virtual addresses in different processes can be…
The same: easier to exchange pointers, keep data structures consistent
Different: may be easier to actually implement
Not a problem if there are only a few shared regions
Can be very difficult if many processes share regions with each other
(A small shared-mapping sketch follows below.)
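The sketch below shows two processes whose page table entries point to the same physical frame: a parent maps an anonymous shared page with POSIX/Linux-style mmap(MAP_SHARED | MAP_ANONYMOUS), forks, and both processes see the same memory through their own mappings. This is an illustration of the idea, not code from the course.

/*
 * Parent and child share one page; a write by the child is visible to the parent.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One page, readable and writable, shared between parent and child. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    *shared = 0;

    pid_t pid = fork();
    if (pid == 0) {            /* child: write through its own mapping */
        *shared = 42;
        return 0;
    }
    waitpid(pid, NULL, 0);     /* parent: wait, then read the same frame */
    printf("parent sees %d\n", *shared);   /* prints 42 */
    munmap(shared, sizeof(int));
    return 0;
}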


When are dirty pages written to disk?

On demand (when they’re replaced)
Fewest writes to disk
Slower: replacement takes twice as long (must wait for the disk write and the disk read)
Periodically (in the background) or threshold-based (e.g., if 80% of memory is full): a cleaner process
The cleaner scans through page tables and writes out dirty pages that are fairly old
The cleaner can also keep a list of pages that will be ready for replacement soon (not quite as old)
Page faults handled faster: no need to find space on demand
May use the same structures discussed earlier (clock, etc.)
(A sketch of a cleaner pass follows below.)
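The sketch below is one cleaner pass over a frame table: write back dirty pages whose last access is older than a threshold, so a later replacement does not have to wait for a disk write. The frame table layout, the threshold, and the write_to_disk() stub are illustrative assumptions.

/*
 * Background cleaner: make old dirty pages clean ahead of replacement.
 */
#include <stdio.h>
#include <stdbool.h>

struct frame {
    int  page;
    bool dirty;
    int  last_access;   /* logical time of last reference */
};

/* Stand-in for queueing a real disk write of the frame's contents. */
static void write_to_disk(struct frame *f)
{
    printf("cleaning page %d (last access %d)\n", f->page, f->last_access);
    f->dirty = false;   /* page is now clean and cheap to replace */
}

/* One cleaner pass: clean dirty frames not referenced since now - age_threshold. */
static void cleaner_pass(struct frame *frames, int n, int now, int age_threshold)
{
    for (int i = 0; i < n; i++)
        if (frames[i].dirty && now - frames[i].last_access >= age_threshold)
            write_to_disk(&frames[i]);
}

int main(void)
{
    struct frame frames[] = {
        { 0, true,  2 },   /* old and dirty: gets cleaned */
        { 1, true,  9 },   /* recently used: left alone for now */
        { 2, false, 1 },   /* old but already clean */
    };
    cleaner_pass(frames, 3, /*now=*/10, /*age_threshold=*/5);
    return 0;
}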


Implementation issues

The OS is involved with paging at four times:
Process creation: determine program size, create the page table
During process execution: reset the MMU for the new process, flush the TLB (or reload it from saved state)
Page fault time: determine the virtual address causing the fault, swap the target page out (if needed) and the needed page in
Process termination time: release the page table, return pages to the free pool


How is a page fault handled?

Hardware causes a page fault
General registers saved (as on every exception)
OS determines which virtual page is needed
Actual fault address in a special register
Address of the faulting instruction in a register
Page fault was in fetching the instruction, or in fetching operands for the instruction; the OS must figure out which
OS checks validity of the address
Process killed if the address was illegal
OS finds a frame to put the new page in
If the frame selected for replacement is dirty, write it out to disk
OS requests the new page from disk
Page tables updated
Faulting instruction backed up so it can be restarted
Faulting process scheduled
Registers restored
Program “continues”
(A sketch of this flow follows below.)
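The sketch below follows the page-fault path just listed, with the hardware, scheduling, and disk steps reduced to stubs. Every structure and helper name here is an illustrative assumption, not the actual handler of any operating system.

/*
 * Skeleton of a page-fault handler: validity check, frame selection,
 * optional write-back of a dirty victim, page-in, page table update.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

#define NPAGES  8
#define NFRAMES 4

struct pte { bool valid, dirty; int frame; };

static struct pte page_table[NPAGES];
static int frame_owner[NFRAMES];   /* which virtual page each frame holds, -1 if free */

/* Stubs standing in for the real machinery. */
static bool address_is_legal(int vpage)    { return vpage >= 0 && vpage < NPAGES; }
static int  pick_victim_frame(void)        { return rand() % NFRAMES; }  /* any PRA */
static void write_frame_to_disk(int frame) { printf("  write back frame %d\n", frame); }
static void read_page_from_disk(int vpage, int frame)
{
    printf("  read page %d into frame %d\n", vpage, frame);
}

static void handle_page_fault(int faulting_vpage)
{
    /* 1. Check validity of the faulting address. */
    if (!address_is_legal(faulting_vpage)) {
        printf("illegal address: kill process\n");
        return;
    }

    /* 2. Find a frame: prefer a free one, otherwise run the replacement algorithm. */
    int frame = -1;
    for (int f = 0; f < NFRAMES; f++)
        if (frame_owner[f] == -1) { frame = f; break; }
    if (frame == -1) {
        frame = pick_victim_frame();
        int victim = frame_owner[frame];
        if (page_table[victim].dirty)        /* 3. Dirty victim: write it out first. */
            write_frame_to_disk(frame);
        page_table[victim].valid = false;
    }

    /* 4. Bring in the needed page and update the page table. */
    read_page_from_disk(faulting_vpage, frame);
    frame_owner[frame] = faulting_vpage;
    page_table[faulting_vpage] = (struct pte){ .valid = true, .dirty = false, .frame = frame };

    /* 5. Back up the faulting instruction, reschedule, restore registers (not shown). */
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++) frame_owner[f] = -1;
    handle_page_fault(3);
    handle_page_fault(5);
    handle_page_fault(6);
    handle_page_fault(7);
    handle_page_fault(2);   /* no free frame left: exercises the replacement path */
    return 0;
}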


Backing up an instruction

Problem: a page fault happens in the middle of instruction execution
Some changes may have already happened
Others may be waiting for VM to be fixed
Solution: undo all of the changes made by the instruction
Restart the instruction from the beginning
This is easier on some architectures than others
Example: LW R1, 12(R2)
Page fault in fetching the instruction: nothing to undo
Page fault in getting the value at 12(R2): restart the instruction
Example: ADD (Rd)+, (Rs1)+, (Rs2)+
Page fault in writing to (Rd): may have to undo an awful lot…


Locking pages in memory

Virtual memory and I/O occasionally interact
Potential problem: process P1 issues a call for a read from a device into a buffer
While P1 is waiting for the I/O (blocked), P2 runs
Imagine P2 has a page fault
P1’s I/O buffer might be chosen to be paged out (oops)
This is a problem because the I/O device is going to write to that buffer on P1’s behalf
Solution: allow some pages to be locked (pinned) into memory
Locked pages are immune from being replaced
Pages only stay locked for (relatively) short periods
(A user-level sketch using mlock() follows below.)
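The sketch below shows the locking idea from user space with the POSIX mlock()/munlock() calls: the locked buffer cannot be paged out while an operation is in flight. It only illustrates the concept; for device I/O the kernel pins the buffer pages itself rather than relying on user code.

/*
 * Pin a buffer in physical memory around a (placeholder) operation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE 4096

int main(void)
{
    char *buf = malloc(BUF_SIZE);
    if (buf == NULL)
        return 1;

    /* Pin the buffer's pages (may require privileges or a raised memlock limit). */
    if (mlock(buf, BUF_SIZE) != 0) {
        perror("mlock");
        free(buf);
        return 1;
    }

    /* ... buffer is safe from replacement while the operation happens ... */
    memset(buf, 0, BUF_SIZE);

    munlock(buf, BUF_SIZE);   /* unpin as soon as the operation completes */
    free(buf);
    return 0;
}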


Separating policy and mechanism

Mechanism for page replacement has to be in kernel mode
Modifying page tables
Reading and writing page table entries
Policy for deciding which pages to replace can be in user space
More flexibility, can tailor the page replacement algorithm (PRA)

[Figure: external pager. A user process, a fault handler and MMU handler in kernel space, and an external pager in user space interact as follows: 1. the user process page faults; 2. the kernel tells the external pager which page is needed; 3. the external pager requests the page from backing store; 4. the page arrives; 5. the external pager hands the page to the kernel (“Here is page!”); 6. the MMU handler maps in the page. A small sketch of this split follows below.]
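The sketch below mimics the split in the figure: a kernel-side fault handler (mechanism) sends “page needed” requests to a user-space pager (policy), which decides what to evict and supplies the page. The message types and function names are illustrative assumptions modeled on the figure’s six steps, not a real kernel interface.

/*
 * Policy (external pager) in "user space", mechanism (fault handler) in "kernel".
 */
#include <stdio.h>

struct page_request { int pid; int vpage; };           /* step 2: page needed */
struct page_reply   { int vpage; int evict_vpage; };   /* step 5: here is page */

/* User-space policy: pick a victim and "fetch" the needed page (steps 3-4). */
static struct page_reply external_pager(struct page_request req)
{
    int victim = (req.vpage + 1) % 8;   /* placeholder policy decision */
    printf("pager: fetching page %d for pid %d, evicting page %d\n",
           req.vpage, req.pid, victim);
    return (struct page_reply){ .vpage = req.vpage, .evict_vpage = victim };
}

/* Kernel-side mechanism: field the fault, ask the pager, then map the page (step 6). */
static void fault_handler(int pid, int vpage)
{
    struct page_request req = { .pid = pid, .vpage = vpage };   /* steps 1-2 */
    struct page_reply rep = external_pager(req);
    printf("kernel: unmapping page %d, mapping page %d for pid %d\n",
           rep.evict_vpage, rep.vpage, pid);
}

int main(void)
{
    fault_handler(/*pid=*/1, /*vpage=*/3);
    return 0;
}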