Memory Hierarchy Lecture notes from MKP, H. H. Lee and S. Yalamanchili

Reading: Sections 5.1, 5.2, 5.3, 5.4, 5.8 (some elements), 5.9

Memories: Two Basic Types
- SRAM: value is stored on a pair of inverting gates. Very fast, but takes up more space than DRAM (4 to 6 transistors).
- DRAM: value is stored as charge on a capacitor (must be refreshed). Very small, but slower than SRAM (by a factor of 5 to 10).
[Figure: SRAM cell with word line, pass transistors, bit line and bit line-bar; DRAM cell with word line, pass transistor, capacitor, and bit line]

Memory Technology
- Registers: integrated with the CPU; fastest and most expensive
- Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB
- Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB
- Magnetic disk: 5ms – 20ms, $0.05 – $0.50 per GB
- Ideal memory: the access time of a register with the capacity and cost/GB of disk
- These numbers keep changing fast!

The Memory Hierarchy
[Figure: registers and ALU (managed by the compiler) → cache (managed by the hardware) → main memory and disk (managed by the operating system); levels get cheaper moving down and faster moving up]
Where do Solid State Disks (SSDs) fit?

Memory Hierarchy
[Die photos: Intel Sandy Bridge (from http://benchmarkreviews.com), AMD Bulldozer (from http://brightsideofnews.com and http://hexus.net)]
Going off-chip is expensive in time and energy.

Principle of Locality
Programs access a small proportion of their address space at any time.
- Temporal locality: items accessed recently are likely to be accessed again soon, e.g., instructions in a loop, induction variables
- Spatial locality: items near those accessed recently are likely to be accessed soon, e.g., sequential instruction access, array data

Locality: Example. Not shown in the example: the stack!
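The example on this slide is a figure that does not survive in this transcript. As a stand-in, here is a minimal C sketch (not from the slides): the sequential array traversal shows spatial locality, while the reuse of the loop instructions and the running sum shows temporal locality.

```c
#include <stdio.h>

int main(void) {
    int a[1024];
    long sum = 0;

    for (int i = 0; i < 1024; i++)   /* sequential writes: spatial locality */
        a[i] = i;

    for (int i = 0; i < 1024; i++)   /* same loop instructions and 'sum' reused: temporal locality */
        sum += a[i];                 /* neighbouring elements share cache lines */

    printf("sum = %ld\n", sum);
    return 0;
}
```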

Taking Advantage of Locality
Memory hierarchy:
- Store everything on disk
- Copy recently accessed (and nearby) items from disk to a smaller DRAM memory: the main memory and virtual memory concept
- Copy more recently accessed (and nearby) items from DRAM to a smaller SRAM memory: the cache memory attached to the CPU
- Copy the most recently accessed items from the cache to registers

Cache Basic Concepts
- Block (aka line): the unit of copying; may be multiple words
- If the accessed data is present in the upper level: hit, access satisfied by the upper level; hit ratio = hits/accesses
- If the accessed data is absent: miss, block copied from the lower level; the time taken is the miss penalty; miss ratio = misses/accesses = 1 – hit ratio

Cache Memory
- Cache memory: the level of the memory hierarchy closest to the CPU
- Given accesses X1, …, Xn–1, Xn: how do we know if the data is present? Where do we look?

Basic Principle: Address Breakdown
The same address can be interpreted in more than one way (examples: 0x80080000, 0x80080004, 0x80081000). For a 32-bit address:
- 32-bit word: the low 2 bits select the byte in a word
- 16-byte line: 2 bits select the word in a line (4 offset bits in total for the byte in a line); the remaining 28 bits are the line #/address
- 4KB page: the low 12 bits select the byte within a page; the remaining 20 bits are the page #/page address
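A small sketch of this breakdown in C, assuming the slide's parameters (32-bit addresses, 16-byte lines, 4KB pages); the variable names are illustrative only.

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t addr = 0x80080004;

    /* 16-byte line: 4 offset bits (2 byte-in-word + 2 word-in-line), 28-bit line number */
    uint32_t byte_in_line = addr & 0xF;
    uint32_t word_in_line = (addr >> 2) & 0x3;
    uint32_t line_number  = addr >> 4;

    /* 4KB page: 12 offset bits, 20-bit page number */
    uint32_t byte_in_page = addr & 0xFFF;
    uint32_t page_number  = addr >> 12;

    printf("addr 0x%08" PRIx32 ": line 0x%07" PRIx32 ", word %" PRIu32 ", byte %" PRIu32
           ", page 0x%05" PRIx32 ", page offset 0x%03" PRIx32 "\n",
           addr, line_number, word_in_line, byte_in_line, page_number, byte_in_page);
    return 0;
}
```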

Direct Mapped Cache
- Location determined by the address
- Direct mapped: only one choice: (Block address) modulo (#Blocks in cache)
- #Blocks is a power of 2, so use the low-order address bits

Tags and Valid Bits
- How do we know which particular block is stored in a cache location? Store the block address as well as the data; actually, only the high-order bits are needed: the tag
- What if there is no data in a location? Valid bit: 1 = present, 0 = not present; initially 0
- Difference?
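A sketch of a direct-mapped lookup that combines the index, tag, and valid bit described above. The geometry (8 one-word blocks) matches the example slides that follow; the structure and function names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8                     /* must be a power of 2 */

struct line {
    bool     valid;                      /* 1 = block present (initially 0) */
    uint32_t tag;                        /* high-order bits of the block address */
    uint32_t data;                       /* one word per block in this sketch */
};

static struct line cache[NUM_BLOCKS];

/* Returns true on a hit; block_addr is the word/block address (no byte offset). */
static bool lookup(uint32_t block_addr, uint32_t *data_out) {
    uint32_t index = block_addr % NUM_BLOCKS;   /* low-order bits select the slot */
    uint32_t tag   = block_addr / NUM_BLOCKS;   /* remaining high-order bits      */

    if (cache[index].valid && cache[index].tag == tag) {
        *data_out = cache[index].data;          /* hit */
        return true;
    }
    return false;                               /* miss: fetch from the next level */
}

int main(void) {
    uint32_t w;
    printf("addr 22 (10110): %s\n", lookup(22, &w) ? "hit" : "miss");  /* miss: cache is empty */
    return 0;
}
```

With NUM_BLOCKS = 8, a word address such as 22 (10110) splits into a 3-bit index (110) and a 2-bit tag (10), exactly as in the example slides below.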

Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Cache Example (cont.)
Access: word addr 22, binary 10 110 → miss, placed in cache block 110.

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (cont.)
Access: word addr 26, binary 11 010 → miss, placed in cache block 010.

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (cont.)
Accesses: word addr 22 (10 110) → hit in block 110; word addr 26 (11 010) → hit in block 010.

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (cont.)
Accesses: word addr 16 (10 000) → miss, block 000; word addr 3 (00 011) → miss, block 011; word addr 16 (10 000) → hit, block 000.

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (cont.)
Access: word addr 18, binary 10 010 → miss; block 010 is replaced: Mem[11010] (tag 11) is evicted and Mem[10010] (tag 10) takes its place.

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Address Subdivision

Block Size Considerations
- Larger blocks should reduce the miss rate, due to spatial locality
- But in a fixed-sized cache: larger blocks → fewer of them → more competition → increased miss rate; larger blocks → pollution
- Larger miss penalty can override the benefit of the reduced miss rate; early restart and critical-word-first can help

Performance: increasing the block size tends to decrease the miss rate.
[Plot: miss rate (0%–40%) vs. block size (4–256 bytes) for cache sizes of 1KB, 8KB, 16KB, 64KB, and 256KB]
Trading off temporal vs. spatial locality.

Cache Misses
- On a cache hit, the CPU proceeds normally
- On a cache miss: stall the CPU pipeline, fetch the block from the next level of the hierarchy
- Instruction cache miss: restart the instruction fetch
- Data cache miss: complete the data access
[Pipeline stages: IF ID EX MEM WB]

Write-Through
- On a data-write hit, we could just update the block in the cache, but then cache and memory would be inconsistent
- Write-through: also update memory
- But this makes writes take longer; e.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles: effective CPI = 1 + 0.1 × 100 = 11
- Solution: write buffer; it holds data waiting to be written to memory, the CPU continues immediately, and it only stalls on a write if the write buffer is already full

Write Through (cont.)
- Write buffers are used to hide the latency of memory writes by overlapping writes with useful work
- Ensures consistency between cache contents and main memory contents at all times
- Write traffic can dominate performance
[Figure: CPU → cache plus write buffer → main memory; reads check the write buffer]

Write-Back
- Alternative: on a data-write hit, just update the block in the cache, and keep track of whether each block is dirty
- When a dirty block is replaced: write it back to memory
- Can use a write buffer to allow the replacing block to be read first; still use the write buffer to hide the latency of write operations
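A sketch of write-back, write-allocate handling of a store using the dirty bit described above; write_back_to_memory and fetch_from_memory are stand-ins for the next level of the hierarchy, not real APIs.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholders for the next level of the hierarchy (assumptions for this sketch). */
static void write_back_to_memory(uint32_t tag, uint32_t data) {
    printf("write back: tag=%u data=%u\n", (unsigned)tag, (unsigned)data);
}
static uint32_t fetch_from_memory(uint32_t tag) { (void)tag; return 0; }

struct wb_line {
    bool     valid;
    bool     dirty;     /* set on a write hit: the block now differs from memory */
    uint32_t tag;
    uint32_t data;
};

/* Write-back, write-allocate handling of a store to one cache line. */
static void write_word(struct wb_line *l, uint32_t tag, uint32_t value) {
    if (!(l->valid && l->tag == tag)) {             /* write miss               */
        if (l->valid && l->dirty)                   /* dirty victim: flush it   */
            write_back_to_memory(l->tag, l->data);  /* (often via write buffer) */
        l->data  = fetch_from_memory(tag);          /* write-allocate: fetch    */
        l->tag   = tag;
        l->valid = true;
    }
    l->data  = value;                               /* update the cached block  */
    l->dirty = true;
}

int main(void) {
    struct wb_line l = {0};
    write_word(&l, 3, 42);   /* miss, allocate                            */
    write_word(&l, 3, 43);   /* hit, stays dirty                          */
    write_word(&l, 7, 99);   /* miss on a dirty line: tag 3 written back  */
    return 0;
}
```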

Write Back (cont.)
- Locality of writes impacts memory traffic
- Writes occur at the speed of the cache
- Complexity of cache management is increased
- The cache may be inconsistent with main memory
[Figure: cache array with state bits (valid/invalid, dirty), tag, and data fields feeding a mux]

Write Allocation
What should happen on a write miss?
- Alternatives for write-through: allocate on miss (fetch the block), or write around (don't fetch the block), since programs often write a whole block before reading it (e.g., initialization)
- For write-back: usually fetch the block

Summary: Hits vs. Misses
- Read hits: this is what we want!
- Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart
- Write hits: can replace the data in cache and memory (write-through), or write the data only into the cache and write it back later (write-back)
- Write misses: read the entire block into the cache, then write the word… ?

Interface Signals
CPU ↔ cache: Read/Write, Valid, Address (32), Write Data (32), Read Data (32), Ready
Cache ↔ memory: Read/Write, Valid, Address (32), Write Data (128), Read Data (128), Ready
Multiple cycles per access.

Cache Controller FSM

Main Memory Supporting Caches
- Use DRAMs for main memory: fixed width (e.g., 1 word), connected by a fixed-width clocked bus; the bus clock is typically slower than the CPU clock
- Example cache block read: send address(es) to memory, time to read a cache line, time to transfer data to the cache

DRAM Organization
Consider all of the steps a lw instruction must go through! We will use a simple model:
core → transaction request sent to the memory controller (MC) → converted to DRAM commands → commands sent to the DRAM

Basic DRAM Organization
[Figure from https://www.sei.cmu.edu/cyber-physical/research/timing-verification/Multicore-scheduling-cont.cfm]

DRAM Ranks
[Figure: a single rank of eight x8 chips (8 × 8b = 64b), a single rank of sixteen x4 chips (16 × 4b = 64b), and a dual-rank module of x8 chips (two 64b ranks)]

Increasing Memory Bandwidth
Example cache block read for organization (a): 1 bus cycle for the address transfer, 15 bus cycles per DRAM access, 1 bus cycle per data transfer.
For a 4-word block and 1-word-wide DRAM:
Miss penalty = 1 + 4 × 15 + 4 × 1 = 65 bus cycles
Bandwidth = 16 bytes / 65 cycles ≈ 0.25 B/cycle
How about the bandwidth for the other organizations?
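A sketch of this arithmetic, assuming the three classic organizations from the textbook figure: (a) one-word-wide memory, (b) a four-word-wide memory and bus, and (c) four-bank interleaved memory with a one-word bus. The per-step latencies are the slide's; organizations (b) and (c) are my assumptions about the figure.

```c
#include <stdio.h>

int main(void) {
    const int addr_cycles = 1;   /* send the address                */
    const int dram_cycles = 15;  /* one DRAM access                 */
    const int xfer_cycles = 1;   /* transfer one word on the bus    */
    const int block_words = 4;
    const int block_bytes = 16;

    /* (a) one-word-wide memory: each word needs its own access and transfer */
    int narrow = addr_cycles + block_words * dram_cycles + block_words * xfer_cycles;

    /* (b) 4-word-wide memory and bus: one access, one transfer (assumed organization) */
    int wide = addr_cycles + dram_cycles + xfer_cycles;

    /* (c) 4-bank interleaved memory, 1-word bus: accesses overlap, transfers do not */
    int interleaved = addr_cycles + dram_cycles + block_words * xfer_cycles;

    printf("(a) %d cycles, %.2f B/cycle\n", narrow,      (double)block_bytes / narrow);
    printf("(b) %d cycles, %.2f B/cycle\n", wide,        (double)block_bytes / wide);
    printf("(c) %d cycles, %.2f B/cycle\n", interleaved, (double)block_bytes / interleaved);
    return 0;
}
```

Under these assumptions the sketch prints 65, 17, and 20 cycles, i.e., 0.25, 0.94, and 0.80 bytes per bus cycle.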

Measuring Cache Performance
Components of CPU time:
- Program execution cycles (includes the cache hit time)
- Memory stall cycles (mainly from cache misses)
Computing memory stall cycles:

Measuring Performance
Memory stall cycles = IC × memory references/instruction × miss rate × miss penalty
Read stalls = IC × reads/instruction × read miss rate × miss penalty
Write stalls = IC × writes/instruction × write miss rate × miss penalty
Memory references = instructions × references/instruction, counting both data references and instruction references.
These expressions are themselves an approximation. Note the equivalence between using misses/instruction and misses/memory reference.
Some example problems follow.

Cache Performance Example
Given: I-cache miss rate = 2%, D-cache miss rate = 4%, miss penalty = 100 cycles, base CPI (ideal cache) = 2, loads & stores are 36% of instructions.
Miss cycles per instruction:
- I-cache: 0.02 × 100 = 2
- D-cache: 0.36 × 0.04 × 100 = 1.44
Actual CPI = 2 + 2 + 1.44 = 5.44
The ideal CPU is 5.44 / 2 = 2.72 times faster!
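The slide's arithmetic as a small C sketch, using only the numbers given above.

```c
#include <stdio.h>

int main(void) {
    double base_cpi     = 2.0;
    double miss_penalty = 100.0;
    double i_miss_rate  = 0.02;
    double d_miss_rate  = 0.04;
    double mem_per_inst = 0.36;   /* loads and stores per instruction */

    double i_stalls = i_miss_rate * miss_penalty;                 /* 2.00 */
    double d_stalls = mem_per_inst * d_miss_rate * miss_penalty;  /* 1.44 */
    double cpi      = base_cpi + i_stalls + d_stalls;             /* 5.44 */

    printf("CPI = %.2f, slowdown vs. ideal = %.2fx\n", cpi, cpi / base_cpi);
    return 0;
}
```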

Average Access Time
Hit time is also important for performance.
Average memory access time: AMAT = hit time + miss rate × miss penalty
Example: CPU with a 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
AMAT = 1 + 0.05 × 20 = 2 ns, i.e., 2 cycles per instruction
More generally, CPI = base CPI + Prob(event) × Penalty(event). Examples follow.
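The AMAT example above in the same style of sketch; all figures come from the slide.

```c
#include <stdio.h>

int main(void) {
    double clock_ns     = 1.0;    /* 1 ns clock        */
    double hit_cycles   = 1.0;
    double miss_penalty = 20.0;   /* cycles            */
    double miss_rate    = 0.05;   /* I-cache miss rate */

    double amat_cycles = hit_cycles + miss_rate * miss_penalty;   /* 2 cycles */
    printf("AMAT = %.1f cycles = %.1f ns\n", amat_cycles, amat_cycles * clock_ns);
    return 0;
}
```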

Performance Summary
- When CPU performance increases, the miss penalty becomes more significant
- Decreasing the base CPI: a greater proportion of time is spent on memory stalls
- Increasing the clock rate: memory stalls account for more CPU cycles
- We can't neglect cache behavior when evaluating system performance

Associative Caches
- Fully associative: allow a given block to go in any cache entry; requires all entries to be searched at once; one comparator per entry (expensive)
- n-way set associative: each set contains n entries; the block number determines the set: (block number) modulo (#sets in cache); search all entries in a given set at once; n comparators (less expensive)
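A sketch of an n-way set-associative lookup using the set-index rule above; NUM_SETS and WAYS are illustrative parameters (here a 2-way cache with 4 sets), and in hardware the per-way comparisons happen in parallel rather than in a loop.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 4            /* power of 2            */
#define WAYS     2            /* 2-way set associative */

struct entry {
    bool     valid;
    uint32_t tag;
    uint32_t data;
};

static struct entry sets[NUM_SETS][WAYS];

/* Search all WAYS entries of one set (in parallel in hardware, serially here). */
static bool sa_lookup(uint32_t block_addr, uint32_t *data_out) {
    uint32_t set = block_addr % NUM_SETS;     /* (block number) mod (#sets) */
    uint32_t tag = block_addr / NUM_SETS;

    for (int way = 0; way < WAYS; way++) {    /* one comparator per way in hardware */
        if (sets[set][way].valid && sets[set][way].tag == tag) {
            *data_out = sets[set][way].data;
            return true;                      /* hit */
        }
    }
    return false;                             /* miss */
}

int main(void) {
    uint32_t w;
    printf("block 5: %s\n", sa_lookup(5, &w) ? "hit" : "miss");   /* miss: cache is empty */
    return 0;
}
```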

Example: Fully Associative Cache
[Figure: associative tag store with state bits; every tag is compared against the address, selecting a data line (bytes 31–0) through a mux]

Spectrum of Associativity For a cache with 8 entries

Associativity Example
Compare 4-block caches: direct mapped, 2-way set associative, fully associative.
Block access sequence: 0, 8, 0, 6, 8.

Direct mapped (index = block address mod 4):
Block addr  Index  Hit/miss  Cache content after access
0           0      miss      index 0: Mem[0]
8           0      miss      index 0: Mem[8]
0           0      miss      index 0: Mem[0]
6           2      miss      index 0: Mem[0], index 2: Mem[6]
8           0      miss      index 0: Mem[8], index 2: Mem[6]

Associativity Example (cont.)

2-way set associative (set = block address mod 2):
Block addr  Set  Hit/miss  Cache content after access
0           0    miss      set 0: Mem[0]
8           0    miss      set 0: Mem[0], Mem[8]
0           0    hit       set 0: Mem[0], Mem[8]
6           0    miss      set 0: Mem[0], Mem[6]
8           0    miss      set 0: Mem[8], Mem[6]

Fully associative:
Block addr  Hit/miss  Cache content after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]
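To make the tables above concrete, here is a small simulation sketch (not from the slides) that replays the block sequence 0, 8, 0, 6, 8 against a 4-block cache with 1-way, 2-way, and 4-way (fully associative) organization and LRU replacement; it reproduces the 0, 1, and 2 hits shown in the tables.

```c
#include <stdbool.h>
#include <stdio.h>

#define CACHE_BLOCKS 4

struct way { bool valid; int block; int age; };   /* age: larger = used longer ago */

/* Simulate a 4-block cache with LRU replacement and the given associativity. */
static void simulate(const char *name, int ways, const int *seq, int n) {
    struct way cache[CACHE_BLOCKS] = {0};
    int sets = CACHE_BLOCKS / ways;
    int hits = 0;

    for (int i = 0; i < n; i++) {
        int set = seq[i] % sets;
        struct way *s = &cache[set * ways];       /* the ways of the indexed set */
        int victim = 0;
        bool hit = false;

        for (int w = 0; w < ways; w++) s[w].age++;        /* everyone gets older        */

        for (int w = 0; w < ways; w++) {
            if (s[w].valid && s[w].block == seq[i]) {      /* match within the set       */
                hit = true; s[w].age = 0; break;
            }
            if (!s[victim].valid) continue;                /* keep an invalid victim     */
            if (!s[w].valid || s[w].age > s[victim].age) victim = w;   /* prefer LRU     */
        }
        if (hit) {
            hits++;
        } else {                                           /* miss: fill LRU/invalid way */
            s[victim].valid = true; s[victim].block = seq[i]; s[victim].age = 0;
        }
        printf("%-14s block %d: %s\n", name, seq[i], hit ? "hit" : "miss");
    }
    printf("%-14s hits = %d / %d\n\n", name, hits, n);
}

int main(void) {
    int seq[] = {0, 8, 0, 6, 8};
    simulate("direct mapped", 1, seq, 5);    /* 0 hits */
    simulate("2-way",         2, seq, 5);    /* 1 hit  */
    simulate("fully assoc.",  4, seq, 5);    /* 2 hits */
    return 0;
}
```

The direct-mapped run misses on every access because blocks 0 and 8 both map to index 0; full associativity removes those conflict misses.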

How Much Associativity?
Increased associativity decreases the miss rate, but with diminishing returns.
Simulation of a system with a 64KB D-cache, 16-word blocks, SPEC2000:
- 1-way: 10.3%
- 2-way: 8.6%
- 4-way: 8.3%
- 8-way: 8.1%

Set Associative Cache Organization
[Figure: the tags of all ways in the indexed set are searched in parallel]

Summary: Placement Policy
- Direct mapped: no choice
- Set associative: any location in the set of lines, dictated by the replacement policy
- Fully associative: any line in the cache, dictated by the replacement policy

Summary: Replacement Policy
- Direct mapped: no choice
- Set associative: prefer a non-valid entry, if there is one; otherwise, choose among the entries in the set
- Least-recently used (LRU): choose the entry unused for the longest time; simple for 2-way, manageable for 4-way, too hard beyond that
- Random: gives approximately the same performance as LRU for high associativity

Multilevel Caches
- Primary cache attached to the CPU: small, but fast
- Level-2 cache services misses from the primary cache: larger, slower, but still faster than main memory
- Main memory services L2 cache misses
- Some high-end systems include an L3 cache

Multilevel Caches (cont.)
- Goal: balance (fast) hits vs. (slow) misses; techniques for the former are distinct from those for the latter
- Goal: keep up with the processor vs. keep up with memory
[Figure: Level 1 cache → Level 2 cache → main memory; example: addressing]

Multilevel Cache Example
Given: CPU base CPI = 1, clock rate = 4 GHz, miss rate = 2% per instruction, main memory access time = 100 ns.
With just a primary cache:
Miss penalty = 100 ns / 0.25 ns = 400 cycles
Effective CPI = 1 + 0.02 × 400 = 9

Example (cont.)
Now add an L2 cache: access time = 5 ns, global miss rate to main memory = 0.5%.
Primary miss with L2 hit: penalty = 5 ns / 0.25 ns = 20 cycles
Primary miss with L2 miss: extra penalty = 400 cycles
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
Performance ratio = 9 / 3.4 = 2.6
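The two-slide example as a sketch; every parameter comes from the slides (base CPI 1, 4 GHz clock, 2% misses per instruction, 100 ns memory, 5 ns L2, 0.5% global miss rate).

```c
#include <stdio.h>

int main(void) {
    double base_cpi      = 1.0;
    double clock_ns      = 0.25;   /* 4 GHz */
    double miss_per_inst = 0.02;   /* primary-cache misses per instruction            */
    double mem_ns        = 100.0;
    double l2_ns         = 5.0;
    double global_miss   = 0.005;  /* misses per instruction that reach main memory   */

    double mem_penalty = mem_ns / clock_ns;   /* 400 cycles */
    double l2_penalty  = l2_ns / clock_ns;    /*  20 cycles */

    double cpi_l1_only = base_cpi + miss_per_inst * mem_penalty;          /* 9.0 */
    double cpi_l1_l2   = base_cpi + miss_per_inst * l2_penalty
                                  + global_miss * mem_penalty;            /* 3.4 */

    printf("L1 only: CPI = %.1f\n", cpi_l1_only);
    printf("L1 + L2: CPI = %.1f (%.1fx faster)\n", cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
    return 0;
}
```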

Multilevel Cache Considerations
- Primary cache: focus on minimal hit time
- L2 cache: focus on a low miss rate to avoid main memory accesses; hit time has less overall impact
- Results: the L1 cache is usually smaller than a single-level cache would be, and the L1 block size is smaller than the L2 block size

Sources of Misses
- Compulsory misses (aka cold start misses): first access to a block
- Capacity misses: due to finite cache size; a replaced block is later accessed again
- Conflict misses (aka collision misses): in a non-fully-associative cache, due to competition for entries in a set; would not occur in a fully associative cache of the same total size

Cache Design Trade-offs

Design change           Effect on miss rate           Negative performance effect
Increase cache size     Decrease capacity misses      May increase access time
Increase associativity  Decrease conflict misses      May increase access time
Increase block size     Decrease compulsory misses    Increases miss penalty; for very large block sizes, may increase miss rate due to pollution

Miss Penalty Reduction
- Return the requested word first, then back-fill the rest of the block
- Non-blocking miss processing: hit under miss (allow hits to proceed), miss under miss (allow multiple outstanding misses)
- Hardware prefetch: instructions and data
- Opteron X4: bank-interleaved L1 D-cache, two concurrent accesses per cycle

Example: Intel Sandy Bridge
Sandy Bridge i5-2400:
- L1 I & D caches: 32K, 8-way, 64-byte lines
- L2 unified cache: 256K, 8-way, 64-byte lines
- L3 shared: 6MB, 12-way, 64-byte lines
Sandy Bridge-E can have up to 20MB of L3!
[Die photos: Sandy Bridge-E layout (source: Intel), Sandy Bridge i7-970]

Example: Intel Nehalem
Per core: 32KB L1 I-cache, 32KB L1 D-cache, 256KB L2 cache.
[Die photo: Intel Nehalem 4-core processor]

3-Level Cache Organization

Intel Nehalem:
- L1 caches (per core): I-cache 32KB, 64-byte blocks, 4-way, approx LRU replacement, hit time n/a; D-cache 32KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
- L2 unified cache (per core): 256KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
- L3 unified cache (shared): 8MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a

AMD Opteron X4:
- L1 caches (per core): I-cache 32KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles; D-cache 32KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles
- L2 unified cache (per core): 512KB, 64-byte blocks, 16-way, approx LRU replacement, write-back/allocate, hit time n/a
- L3 unified cache (shared): 2MB, 64-byte blocks, 32-way, replace the block shared by the fewest cores, write-back/allocate, hit time 32 cycles

n/a: data not available

Concluding Remarks
- Fast memories are small; large memories are slow. We really want fast, large memories, and caching gives this illusion.
- Principle of locality: programs use a small part of their memory space frequently
- Memory hierarchy: L1 cache → L2 cache → … → DRAM memory
- Memory system design is critical for multiprocessors

Study Guide
- Given a memory system description (e.g., cache and DRAM parameters), what is the breakdown of the addresses?
- Given the state of the memory hierarchy, be able to determine the changes required on a new access. See sample problems.
- Given a main memory and cache architecture, be able to compute the impact on CPI. See sample problems.
- Given the state of a cache system in a coherent shared-memory architecture, be able to determine the state changes when a new access is presented.

Glossary
Associativity
Cache coherence
Cache line or block
Cache hit
Cache miss
Direct mapped cache
Fully associative cache
Memory hierarchy
Multilevel cache
Miss penalty
Replacement policy
Set associative cache
Spatial locality
Snooping protocol
Temporal locality
Tag
Write through
Write back