CS 2510


Slide1

CS 2510

OS Basics, cont’d

Slide2

Dynamic Memory Allocation

How does system manage memory of a single process?

View: Each process has contiguous logical address space

Slide3

Dynamic Storage Management

Static (compile-time) allocation is not possible for data

Recursive procedures

Even regular procedures are hard to predict (data dependencies)

Complex data structures

Storage used inefficiently when reserved statically

Must reserve enough to handle worst case

ptr = allocate(x bytes)

free(ptr)

Dynamic allocation can be handled in 2 ways

Stack allocation

Restricted, but simple and efficient

Heap allocation

More general, but less efficient

Harder to implement

Slide4

Stack Organization

Definition: Memory is freed in opposite order from allocation

Alloc(A)

Alloc(B)

Alloc(C)

Free(C)

Free(B)

Free(A)

When is it useful?

Memory allocation and freeing are partially predictable

Allocation is hierarchical

Example

Procedure call frames

Tree traversal, expression evaluation, parsing

Slide5

Stack Implementation

Advance pointer dividing allocated and free space

Allocate: Increment pointer

Free: Decrement pointer

x86: Special ‘stack pointer’ register

‘SP’ (16-bit), ‘ESP’ (32-bit), ‘RSP’ (64-bit)

Where does this register point to?

How does the x86 allocate and free?

Stack grows down

Advantage

Keeps all the free space contiguous

Simple and efficient to implement

Disadvantage: Not appropriate for all data structures

Slide6

Heap Organization

Definition: Allocate from arbitrary locations

Memory consists of allocated areas and free areas (or holes)

When is it useful?

Allocation and release are unpredictable

Arbitrary list structures, complex data organizations

Examples: new in C++, malloc() in C

Advantage: Works on arbitrary allocation and free patterns

Disadvantage: End up with small chunks of free space

[Memory layout: Free (16 bytes) | Alloc (32 bytes) | Free (16 bytes) | Alloc (12 bytes)]

How to allocate 24 bytes?

Slide7

Fragmentation

Definition: Free memory that is too small to be usefully allocated

External: Visible to system

Internal: Visible to process (e.g., if allocation is done at some granularity)

Goal

Keep number of holes small

Keep size of holes large

Stack allocation

All free space is contiguous in one large region

How do we implement heap allocation?

Slide8

Heap implementation

Data Structure: Linked list of free blocks

Free list tracks storage not in use

Allocation: Choose block large enough for request (according to policy criteria!), update pointers and size variable

Free: Add block to free list, merge adjacent free blocks

if (addr of new block == prev_addr + size) { combine blocks }

Project 2!!!

Slide9

x86 and Linux

Where is the heap managed?

User space or kernel?

brk() system call

Expands or contracts heap

A lot like a stack

Heap grows up

Dedicated virtual address area

Allocated space then managed by heap allocator

Backed by page tables

Slide10

Best vs. First vs. Worst

Best fit

Search the whole list on each allocation

Choose block that most closely matches size of request

Can stop searching on an exact match

First fit

Allocate first block that is large enough

Rotating first fit:

Start with next free block each time

Worst fit

Allocate largest block to request (most leftover space)

Which is best?

Slide11

Examples

Best algorithm: Depends on sequence of requests

Example: Memory contains 2 free blocks of size 20 and 15 bytes

Allocation requests: 10 then 20

Allocation requests: 8, 12, then 12

Slide12

Buddy Allocation

Fast, simple allocation for blocks that are 2^n bytes (Knuth 1968)

Allocation Restrictions

Block sizes 2^n

Represent memory units (2^min_order) with bitmap

Allocation strategy for k bytes

Raise allocation request to nearest 2^n

Search free list for appropriate size

Recursively divide larger blocks until reach block of correct size

“Buddy” blocks remain free

Free strategy

Recursively coalesce block with buddy if buddy free

May coalesce lazily to avoid overhead

Slide13

Example

1MB of memory

Allocate: 70KB, 35KB, 80KB

Free: 70KB, 35KB

Slide14

Comparison of Allocation Strategies

Best fit

Tends to leave some very large holes, some very small ones

Disadvantage: Very small holes can’t be used easily

First fit

Tends to leave “average” size holes

Advantage: Faster than best fit

Buddy

Organizes memory to minimize external fragmentation

Leaves large chunks of free space

Faster to find hole of appropriate size

Disadvantage: Internal fragmentation when not power of 2 request

Slide15

Memory allocation in practice

malloc() in C:

Calls sbrk() to request more contiguous memory

Add small header to each block of memory

Pointer to next free block, or size of block

Where must header be placed?

Combination of two data structures

Separate free list for each popular size

Allocation is fast, no fragmentation

Inefficient if some lists are empty while others have many free blocks

First fit on list of irregular free blocks

Combine blocks and shuffle blocks between lists

Slide16

Reclaiming Free Memory

When can dynamically allocated memory be freed?

Easy when a chunk is only used in one place

Explicitly call free()

Hard when information is shared

Can’t be recycled until all sharers are finished

Sharing is indicated by the presence of pointers to the data

Without a pointer, can’t access data (can’t find data)

Two possible problems

Dangling pointers:

Recycle storage while it's still being used

Memory leaks:

Forget to free storage even when it can't be used again

Not a problem for short lived user processes

Issue for OS and long running applications

Slide17

Reference Counts

Idea

Keep track of the number of references to each chunk of memory

When reference count reaches zero, free memory

Example

Files and hard links in Unix

Smalltalk

Objects in distributed systems

Linux Kernel

Disadvantages

Circular data structures -> memory leaks

Slide18

Garbage Collection

Idea

Storage isn’t freed explicitly (i.e. no free() operation)

Storage freed implicitly when no longer referenced

Approach

When system needs storage, examine and collect free memory

Advantages

Works with circular data structures

Makes life easier on the application programmer

Slide19

Mark and Sweep Garbage Collection

Requirements

Must be able to find all objects

Must be able to find all pointers to objects

Compiler must cooperate by marking type of data in memory.

Why?

Two Passes

Pass 1: Mark

Start with all statically allocated (where?) and procedure-local variables (where?)

Mark each object

Recursively mark all objects reachable via pointer

Pass 2: Sweep

Go through all objects, free those not marked

Slide20

Garbage Collection in Practice

Disadvantages

Garbage collection is often expensive: 20% or more of CPU time

Difficult to implement

Languages with garbage collection

LISP (emacs)

Java/C#

Scripting languages

Conservative Garbage Collection

Idea: Treat all memory as pointers (what does this mean?)

Can be used for C and C++

Slide21

I/O Devices

Two primary aspects of computer system

Processing (CPU + Memory)

Input/Output

Role of Operating System

Provide a consistent interface

Simplify access to hardware devices

Implement mechanisms for interacting with devices

Allocate and manage resources

Protection

Fairness

Obtain efficient performance

Understand performance characteristics of device

Develop policies

Slide22

I/O Subsystem

[Diagram: User Process → Kernel I/O Subsystem → Device Drivers (software) → Device Controllers → Devices (hardware); each hardware level shows the same examples: SCSI bus, keyboard, mouse, PCI bus, GPU, hard disk]

Slide23

User View of I/O

User Processes cannot have direct access to devices

Manage resources fairly

Protect data from access-control violations

Protect system from crashing

OS exports higher level functions

User process performs system calls (e.g. read() and write())

Blocking vs. Nonblocking I/O

Blocking:

Suspends execution of process until I/O completes

Simple and easy to understand

Inefficient

Nonblocking:

Returns from system calls immediately

Process is notified when I/O completes

Complex but better performance

Slide24

User View: Types of devices

Character-stream

Transfer one byte (character) at a time

Interface:

get() or put()

Implemented as restricted forms of read()/write()

Example: keyboard, mouse, modem, console

Block

Transfer blocks of bytes as a unit (defined by hardware)

Interface:

read() and write()

Random access: seek() specifies which bytes to transfer next

Example: Disks and tapes

Slide25

Kernel I/O Subsystem

I/O scheduled from pool of requests

Requests rearranged to optimize efficiency

Example: Disk requests are reordered to reduce head seeks

Buffering

Deal with different transfer rates

Adjustable transfer sizes

Fragmentation and reassembly

Copy Semantics

Can calling process reuse buffer immediately?

Caching: Avoid device accesses as much as possible

I/O is SLOW

Block devices can read ahead

Slide26

Device Drivers

Encapsulate details of device

Wide variety of I/O devices (different manufacturers and features)

Kernel I/O subsystem not aware of hardware details

Load at boot time or on demand

IOCTLs: Special UNIX system call (I/O control)

Alternative to adding a new system call

Interface between user processes and device drivers

Device specific operation

Looks like a system call, but also takes a file descriptor argument

Why?

Slide27

Device Driver: Device Configuration

Interacts directly with the device controller

Special Instructions

Valid only in kernel mode

X86: In/Out instructions

No longer popular

Memory-mapped

Read and write operations in special memory regions

How are memory operations delivered to the controller?

OS protects interfaces by not mapping memory into user processes

Some devices can map subsets of I/O space to processes

Buffer queues (e.g. network cards)

Slide28

Interacting with Device Controllers

How to know when I/O is complete?

Polling

Disadvantage:

Busy Waiting

CPU cycles wasted when I/O is slow

Often need to be careful with timing

Interrupts

Goal: Enable asynchronous events

Device signals CPU by asserting interrupt request line

CPU automatically jumps to Interrupt Service Routine

Interrupt vector: Table of ISR addresses

Indexed by interrupt number

Lower priority interrupts postponed until higher priority finished

Interrupts can nest

Disadvantage:

Interrupts “interrupt” processing

Interrupt storms

Slide29

Device Driver: Data transfer

Programmed I/O (PIO)

Initiate operation and read in every byte/word of data

Direct Memory Access (DMA)

Offload data transfer work to special-purpose processor

CPU configures DMA transfer

Writes DMA command block into main memory

Target addresses and transfer sizes

Give command block address to DMA engine

DMA engine transfers data from device to memory specified in command block

DMA engine raises interrupt when entire transfer is complete

Virtual or Physical address?

