CS161 – Design and Architecture of Computer

2017-05-29

Slide1

CS161 – Design and Architecture of Computer

Virtual Memory

Slide2

Why Virtual Memory?

- Allows applications to be bigger than main memory size
- Helps with multiple process management
  - Each process gets its own chunk of memory
  - Protection of processes against each other
  - Mapping of multiple processes to memory
- Relocation
  - Application and CPU run in virtual space
  - Mapping of virtual to physical space is invisible to the application
- Management between main memory and disk
  - A miss in main memory is a page fault (or address fault)
  - A block is a page

Slide3

Mapping Virtual to Physical Memory

- Divide memory into equal-sized "chunks" or pages (typically 4 KB each)
- Any page of Virtual Memory can be assigned to any page of Physical Memory

[Figure: a single process's virtual address space (code, static, heap, stack; addresses 0 to ∞) mapped onto 64 MB of physical memory]
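As a sanity check on the figure's numbers: with 4 KB pages, 64 MB of physical memory holds 64 MB / 4 KB = 16,384 page frames. A minimal sketch (the function name is my own):

```python
PAGE_SIZE = 4 * 1024            # 4 KB pages, as on the slide

def frames_in(mem_bytes, page_bytes=PAGE_SIZE):
    """Number of equal-sized page frames that fit in a memory of mem_bytes."""
    return mem_bytes // page_bytes
```

`frames_in(64 * 1024 * 1024)` gives 16384; the same division applied to the virtual space tells you how many virtual pages must be mapped.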

Slide4

Paged Virtual Memory

- Virtual address space divided into pages
- Physical address space divided into page frames
- Page missing in Main Memory = page fault
  - Pages not in Main Memory are on disk: swap-in/swap-out
  - Or have never been allocated
- New page may be placed anywhere in Main Memory (fully associative map)
- Dynamic address translation
  - Effective address is virtual
  - Must be translated to physical for every access
  - Virtual-to-physical translation through a page table in Main Memory
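The fully associative page map described above can be sketched as a small lookup structure; a missing entry stands in for a page that is on disk or never allocated (all names here are illustrative, not from the slides):

```python
class PageFault(Exception):
    """Raised when a virtual page has no frame in main memory."""

# Hypothetical per-process page table: virtual page number -> physical frame.
page_table = {0: 7, 1: 3, 4: 12}

def translate(vpn):
    """Fully associative mapping: any virtual page may sit in any frame."""
    if vpn not in page_table:
        raise PageFault(vpn)          # the OS must swap the page in from disk
    return page_table[vpn]
```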

Slide5

Cache vs VM

                 Cache                          Virtual Memory
                 Block or Line                  Page
                 Miss                           Page Fault
Size:            32-64 B (block)                4 KB-16 KB (page)
Placement:       Direct Mapped,                 Fully Associative
                 N-way Set Associative
Replacement:     LRU or Random                  LRU approximation
Write policy:    Write Through or Write Back    Write Back
How managed:     Hardware                       Hardware + Software
                                                (Operating System)

Slide6

Handling Page Faults

- A page fault is like a cache miss
  - Must find the page in the lower level of the hierarchy
- If the valid bit is zero, the Physical Page Number points to a page on disk
- When the OS starts a new process, it:
  - creates "swap" space on disk for all virtual pages of the process
  - sets all valid bits in the page table to zero
  - sets all Physical Page Numbers to point to disk
- This is called Demand Paging: pages of the process are loaded from disk only as needed
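The demand-paging start-up described above can be sketched as follows; the structures and field names are hypothetical stand-ins for real PTEs:

```python
def new_process(num_pages):
    """All PTEs start invalid; 'where' holds the page's swap-space location."""
    return [{"valid": 0, "where": f"disk:slot{p}"} for p in range(num_pages)]

free_frames = [0, 1, 2]   # physical frames available for this toy example

def access(page_table, vpn):
    """Demand paging: a frame is allocated only on the first touch of a page."""
    pte = page_table[vpn]
    if not pte["valid"]:                  # page fault: bring the page in now
        frame = free_frames.pop(0)
        pte.update(valid=1, where=frame)  # 'where' now names a physical frame
    return pte["where"]
```

Pages the process never touches stay on disk and never consume a frame.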

Slide7

Performing Address Translation

- VM divides memory into equal-sized pages
- Address translation relocates entire pages
  - Offsets within the pages do not change
- If the page size is a power of two, the virtual address separates into two fields (like cache index and offset fields):

virtual address = [ Virtual Page Number | Page Offset ]
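Because the page size is a power of two, the split is pure bit arithmetic; a minimal sketch assuming 4 KB pages:

```python
PAGE_SIZE = 4096                            # 4 KB pages => 12 offset bits
OFFSET_BITS = PAGE_SIZE.bit_length() - 1

def split(vaddr):
    """Split a virtual address into (virtual page number, page offset)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
```

Only the upper field is translated; the offset passes through unchanged.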

Slide8

Mapping Virtual to Physical Address


Slide9

Address Translation

- Want fully associative page placement
- How to locate the physical page?
  - Search is impractical (too many pages)
- A page table is a data structure that contains the mapping of virtual pages to physical pages
  - There are several different ways, all up to the operating system, to keep this data around
- Each process running in the system has its own page table

Slide10

Page Table and Address Translation


Slide11

Page table translates address


Slide12

Mapping Pages to Storage


Slide13

Replacement and Writes

- To reduce the page fault rate, prefer least-recently used (LRU) replacement
  - Reference bit (aka use bit) in the PTE is set to 1 on access to the page
  - Periodically cleared to 0 by the OS
  - A page with reference bit = 0 has not been used recently
- Disk writes take millions of cycles
  - Write a block at once, not individual locations
  - Write-through is impractical, so use write-back
  - Dirty bit in the PTE is set when the page is written
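The reference-bit LRU approximation and the dirty-bit write-back policy can be sketched together (all structures and names hypothetical):

```python
def touch(pte, write=False):
    """Hardware side: set the reference bit on every access, dirty on writes."""
    pte["ref"] = 1
    if write:
        pte["dirty"] = 1

def clear_refs(page_table):
    """OS side: periodic sweep; pages left with ref == 0 were not used recently."""
    for pte in page_table:
        pte["ref"] = 0

def on_evict(pte):
    """Write-back policy: only dirty pages must be written to disk on eviction."""
    return "write to disk" if pte["dirty"] else "drop"
```

Note that clearing the reference bit never clears the dirty bit: recency is re-sampled every sweep, but a modified page stays dirty until written back.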

Slide14

Optimizing VM

- Page Table too big!
  - 4 GB virtual address space / 4 KB page = 2^20 page table entries
  - Assume 4 B per entry: 4 MB just for the Page Table of a single process
  - With 100 processes, 400 MB of memory is required!
- Virtual Memory too slow!
  - Requires two memory accesses: one to access the page table and get the memory address, another to get the actual data
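The arithmetic above can be checked directly (the function name is my own): 2^(32-12) = 2^20 entries at 4 B each is 4 MB per process:

```python
def flat_table_bytes(va_bits=32, page_bits=12, pte_bytes=4):
    """Size of a one-level page table: one PTE per virtual page."""
    return (1 << (va_bits - page_bits)) * pte_bytes
```

With the slide's defaults, `100 * flat_table_bytes()` is 400 MB, matching the claim above.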

Slide15

Multi-level Page Table

- To reduce the size of the page table
  - A 1-level page table is too expensive for a large virtual address space
  - Solution: multi-level page table, paging the page tables, etc.
- To create small page tables for virtual memory
  - The virtual address is now split into multiple chunks to index a page table "tree"

Slide16

Multi-level Page Table

Virtual page number broken into fields to index each level of multi-level page table

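For example, with 4 KB pages a 32-bit address has a 20-bit virtual page number, which a two-level table might split into two 10-bit indices (an illustrative choice, not mandated by the slides):

```python
L1_BITS = L2_BITS = 10   # index bits for the two levels of the table "tree"
OFFSET_BITS = 12         # 4 KB pages

def split_two_level(vaddr):
    """Split a virtual address into (level-1 index, level-2 index, offset)."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    vpn = vaddr >> OFFSET_BITS
    l2 = vpn & ((1 << L2_BITS) - 1)
    l1 = vpn >> L2_BITS
    return l1, l2, offset
```

Second-level tables are only allocated for regions of the address space that are actually in use, which is where the space savings come from.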

Slide17

Fast Address Translation

- Problem: Virtual Memory requires two memory accesses!
  - One to translate the Virtual Address into a Physical Address (page table lookup)
  - One to transfer the actual data (cache hit)
  - But the Page Table is in physical memory! => 2 main memory accesses!
- Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages!
- Why not create a cache of virtual-to-physical address translations to make translation fast? (smaller is faster)
- For historical reasons, such a "page table cache" is called a Translation Lookaside Buffer, or TLB
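A minimal sketch of the idea, with a dict standing in for the hardware's associative lookup (all names are mine):

```python
tlb = {}                           # small cache of recent VPN -> PPN translations
page_table = {0: 8, 1: 9, 2: 10}   # backing translation in main memory
lookups = {"hit": 0, "miss": 0}

def translate(vpn):
    if vpn in tlb:                  # TLB hit: no page-table memory access
        lookups["hit"] += 1
    else:                           # TLB miss: walk the page table, then cache it
        lookups["miss"] += 1
        tlb[vpn] = page_table[vpn]
    return tlb[vpn]
```

Locality means most accesses repeat a recent virtual page, so most translations hit in the TLB and skip the page-table access entirely.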

Slide18

Fast Translation Using a TLB


Slide19

TLB Translation

[Figure: virtual-to-physical address translation by a TLB, and how the resulting physical address is used to access the cache memory]

Slide20

TLB Misses

- If the page is in memory
  - Load the PTE from memory and retry
  - Could be handled in hardware
    - Can get complex for more complicated page table structures
  - Or in software
    - Raise a special exception, with an optimized handler
- If the page is not in memory (page fault)
  - OS handles fetching the page and updating the page table
  - Then restart the faulting instruction

Slide21

TLB Miss Handler

- A TLB miss indicates either:
  - Page present, but PTE not in TLB
  - Page not present
- Must recognize the TLB miss before the destination register is overwritten
  - Raise exception
- Handler copies the PTE from memory to the TLB
  - Then restarts the instruction
  - If the page is not present, a page fault will occur
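The handler's decision can be sketched as (structures hypothetical):

```python
def handle_tlb_miss(tlb, page_table, vpn):
    """Copy the PTE into the TLB if the page is present; else it's a page fault."""
    pte = page_table.get(vpn)
    if pte is None or not pte["valid"]:
        return "page fault"            # OS must fetch the page, then retry
    tlb[vpn] = pte["ppn"]              # refill the TLB from memory ...
    return "restart instruction"       # ... and re-execute the faulting access
```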

Slide22

Page Fault Handler

1. Use the faulting virtual address to find the PTE
2. Locate the page on disk
3. Choose a page to replace
   - If dirty, write it to disk first
4. Read the page into memory and update the page table
5. Make the process runnable again
6. Restart from the faulting instruction
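The steps above, as a sketch (all structures hypothetical; choosing the victim page is left out and passed in as a parameter):

```python
def handle_page_fault(page_table, victim_vpn, faulting_vpn, disk):
    """Write back the victim if dirty, reuse its frame, then restart."""
    victim = page_table[victim_vpn]
    if victim["dirty"]:                      # step 3: if dirty, write to disk first
        disk[victim_vpn] = victim["frame"]   # (stand-in for the actual disk write)
    frame = victim["frame"]
    victim.update(valid=0, frame=None)       # victim is no longer resident
    # step 4: read the faulting page into the freed frame, update the page table
    page_table[faulting_vpn].update(valid=1, dirty=0, frame=frame)
    return "restart"                         # step 6: restart the instruction
```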

Slide23

TLB and Cache Interaction

- If the cache tag uses the physical address
  - Need to translate before the cache lookup
  - Physically Indexed, Physically Tagged

Slide24

TLB and Cache Addressing

- Cache review
  - Set or block field indexes a lookup table (LUT) holding tags
  - 2 steps to determine a hit:
    - Index (lookup) to find the tags (using the block or set bits)
    - Compare tags to determine hit
  - Sequential connection between indexing and tag comparison
- Rather than waiting for address translation and then performing this two-step hit process, can we overlap the translation and portions of the hit sequence?
  - Yes, if we choose page size, block size, and set/direct mapping carefully

Slide25

Cache Index/Tag Options

- Physically indexed, physically tagged (PIPT)
  - Wait for full address translation
  - Then use the physical address for both indexing and tag comparison
- Virtually indexed, physically tagged (VIPT)
  - Use a portion of the virtual address for indexing, then wait for address translation and use the physical address for tag comparisons
  - Easiest when the index portion of the virtual address lies within the offset (page size) address bits; otherwise aliasing may occur
- Virtually indexed, virtually tagged (VIVT)
  - Use the virtual address for both indexing and tagging
  - No TLB access unless there is a cache miss
  - Requires invalidation of cache lines on context switch, or use of a process ID as part of the tags
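The VIPT constraint can be checked numerically: the cache's index plus block-offset bits must fit within the page offset, so the index comes entirely from untranslated bits. A sketch (function name is my own):

```python
def vipt_safe(cache_bytes, block_bytes, ways, page_bytes):
    """True if the index + block-offset bits fit inside the page offset."""
    sets = cache_bytes // (block_bytes * ways)
    index_plus_offset = (sets * block_bytes).bit_length() - 1
    page_offset_bits = page_bytes.bit_length() - 1
    return index_plus_offset <= page_offset_bits
```

For a 4 KB page, a 32 KB 8-way cache with 64 B blocks just fits (64 sets × 64 B = 4 KB), while a 2-way organization of the same size does not.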

Slide26

Virtually Indexed, Physically Tagged

Slide27

Cache & Virtual Memory

Slide28

Summary

- Virtual Memory overcomes main memory size limitations
- VM is supported through Page Tables
- Multi-level Page Tables enable smaller page tables in memory
- The TLB enables fast address translation

