Chapter 11 I/O Management

Uploaded by jacey on 2023-11-09



Presentation Transcript

1. Chapter 11: I/O Management and Disk Scheduling
Operating Systems: Internals and Design Principles, Eighth Edition
By William Stallings

2. Categories of I/O Devices
External devices that engage in I/O with computer systems can be grouped into three categories: human readable (e.g., printers, terminals), machine readable (e.g., disk drives, sensors), and communication (e.g., modems, network adapters).

3. Differences in I/O Devices
Devices differ in a number of areas: data rate, application, complexity of control, unit of transfer, data representation, and error conditions.


5. Organization of the I/O Function
Three techniques for performing I/O are:
- Programmed I/O: the processor issues an I/O command on behalf of a process; that process then busy-waits for the operation to complete before proceeding.
- Interrupt-driven I/O: the processor issues an I/O command on behalf of a process. If non-blocking, the processor continues to execute instructions from the process that issued the I/O command; if blocking, the next instruction the processor executes is from the OS, which puts the current process in a blocked state and schedules another process.
- Direct memory access (DMA): a DMA module controls the exchange of data between main memory and an I/O module.
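The busy-wait behavior of programmed I/O can be illustrated with a toy simulation. Everything here (`FakeDevice`, its cycle count, the data byte) is invented for the example and is not a real device interface:

```python
class FakeDevice:
    """Toy device with a status register; the cycle count is a made-up stand-in
    for how long the device takes to complete the transfer."""
    def __init__(self, cycles_until_ready):
        self.cycles = cycles_until_ready
        self.data = 0x41  # pretend a byte is waiting in the device register

    def status_ready(self):
        # Each poll "consumes" one device cycle until the transfer completes.
        self.cycles -= 1
        return self.cycles <= 0

def programmed_io_read(device):
    """Programmed I/O: the processor polls the status register in a loop,
    burning processor cycles until the device reports ready."""
    polls = 0
    while not device.status_ready():
        polls += 1  # each iteration is a wasted processor cycle
    return device.data, polls
```

With interrupt-driven I/O or DMA, the polling loop disappears: the processor does other work and is notified (or memory is updated directly) when the transfer completes.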

6. Table 11.1 I/O Techniques

                                    No Interrupts      Use of Interrupts
I/O-to-memory transfer              Programmed I/O     Interrupt-driven I/O
  through processor
Direct I/O-to-memory transfer                          Direct memory access (DMA)

7. Evolution of the I/O Function


10. Design Objectives
Efficiency:
- A major effort in I/O design; important because I/O operations often form a bottleneck.
- Most I/O devices are extremely slow compared with main memory and the processor.
- The area that has received the most attention is disk I/O.
Generality:
- Desirable to handle all devices in a uniform manner; applies both to the way processes view I/O devices and to the way the operating system manages I/O devices and operations.
- The diversity of devices makes it difficult to achieve true generality.
- Use a hierarchical, modular approach to the design of the I/O function.

11. Hierarchical Design
- Functions of the operating system should be separated according to their complexity, their characteristic time scale, and their level of abstraction.
- This leads to an organization of the operating system into a series of layers.
- Each layer performs a related subset of the functions required of the operating system.
- Layers should be defined so that changes in one layer do not require changes in other layers.


13. BufferingPerform input transfers in advance of requests being made and perform output transfers some time after the request is made

14. No Buffer
Without a buffer, the operating system directly accesses the device when an I/O request is made.

15. Single BufferOperating system assigns a buffer in main memory for an I/O request

16. Block-Oriented Single Buffer
- Input transfers are made to the system buffer.
- Reading ahead (anticipated input) is done in the expectation that the block will eventually be needed; when the transfer is complete, the process moves the block into user space and immediately requests another block.
- Generally provides a speedup compared to the lack of system buffering.
- Disadvantages: complicates the logic in the operating system; swapping logic is also affected.

17. Stream-Oriented Single Buffer
Line-at-a-time operation:
- appropriate for scroll-mode terminals (dumb terminals)
- user input is one line at a time, with a carriage return signaling the end of a line
- output to the terminal is similarly one line at a time
Byte-at-a-time operation:
- used on forms-mode terminals, when each keystroke is significant
- also used for other peripherals such as sensors and controllers

18. Double Buffer
- Use two system buffers instead of one.
- A process can transfer data to or from one buffer while the operating system empties or fills the other buffer.
- Also known as buffer swapping.

19. Circular Buffer
- Two or more buffers are used; each individual buffer is one unit in a circular buffer.
- Used when an I/O operation must keep up with the process.
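A circular buffer of the kind described above can be sketched as a fixed array with wrapping read and write indices. This is an illustration of the data structure, not kernel code:

```python
class CircularBuffer:
    """Fixed-size ring of buffer slots: a producer (e.g., a device filling
    buffers) writes at the tail while a consumer (the process) reads at the
    head; both indices wrap around the array."""
    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write
        self.count = 0  # slots currently in use

    def put(self, block):
        if self.count == len(self.slots):
            raise BufferError("all buffers full; producer must wait")
        self.slots[self.tail] = block
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("all buffers empty; consumer must wait")
        block = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return block
```

With n_slots = 2 this degenerates to the double buffer of the previous slide; a real implementation would block (rather than raise) when the ring is full or empty.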


21. The Utility of Buffering
- Buffering is a technique that smoothes out peaks in I/O demand; with enough demand, eventually all buffers become full and their advantage is lost.
- When there is a variety of I/O and process activities to service, buffering can increase the efficiency of the OS and the performance of individual processes.

22. Disk Performance Parameters
The actual details of disk I/O operation depend on the computer system, the operating system, and the nature of the I/O channel and disk controller hardware.

23. Positioning the Read/Write Heads
- When the disk drive is operating, the disk is rotating at constant speed.
- To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track.
- Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system.
- On a movable-head system, the time it takes to position the head at the track is known as seek time.
- The time it takes for the beginning of the sector to reach the head is known as rotational delay.
- The sum of the seek time and the rotational delay equals the access time.
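The access-time relationship above can be checked numerically. A small sketch, assuming the rotation speed is given in rpm and taking the average rotational delay as half a revolution (the 7200 rpm and 4 ms seek figures below are illustrative values, not from the slides):

```python
def avg_rotational_delay_ms(rpm):
    """On average the desired sector is half a revolution away."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute
    return ms_per_revolution / 2

def access_time_ms(seek_ms, rpm):
    """Access time = seek time + rotational delay."""
    return seek_ms + avg_rotational_delay_ms(rpm)
```

For example, a 7200 rpm disk rotates once every 8.33 ms, so the average rotational delay is about 4.17 ms; with a 4 ms seek, the access time is roughly 8.17 ms.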


25. Table 11.2 Comparison of Disk Scheduling Algorithms

26. First-In, First-Out (FIFO)
- Processes requests in sequential order.
- Fair to all processes.
- Approximates random scheduling in performance if there are many processes competing for the disk.

27. Table 11.3 Disk Scheduling Algorithms

28. Priority (PRI)
- Control of the scheduling is outside the control of the disk management software.
- The goal is not to optimize disk utilization but to meet other objectives.
- Short batch jobs and interactive jobs are given higher priority; provides good interactive response time.
- Longer jobs may have to wait an excessively long time.
- A poor policy for database systems.

29. Shortest Service Time First (SSTF)
- Select the disk I/O request that requires the least movement of the disk arm from its current position.
- Always chooses the minimum seek time.

30. SCAN
- Also known as the elevator algorithm.
- The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed.
- Favors jobs whose requests are for tracks nearest to both the innermost and outermost tracks.

31. C-SCAN (Circular SCAN)
- Restricts scanning to one direction only.
- When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
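The FIFO, SSTF, SCAN, and C-SCAN policies above can be sketched as functions that reorder a queue of requested track numbers. A minimal Python sketch; the starting track (100) and request queue are sample values chosen for illustration:

```python
def fifo(start, requests):
    """Serve requests in arrival order."""
    return list(requests)

def sstf(start, requests):
    """Repeatedly pick the pending track closest to the current arm position."""
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(start, requests, direction="up"):
    """Sweep in one direction to the end, then reverse (elevator algorithm)."""
    lower = sorted(t for t in requests if t < start)    # tracks below the head
    upper = sorted(t for t in requests if t >= start)   # tracks at/above the head
    if direction == "up":
        return upper + lower[::-1]
    return lower[::-1] + upper

def c_scan(start, requests):
    """Sweep up only; after the last track, jump back and sweep up again."""
    upper = sorted(t for t in requests if t >= start)
    lower = sorted(t for t in requests if t < start)
    return upper + lower

def seek_distance(start, order):
    """Total arm movement, in tracks, for a given service order."""
    total, pos = 0, start
    for t in order:
        total += abs(t - pos)
        pos = t
    return total
```

Comparing `seek_distance` across the four orderings for the same request queue shows why SSTF and the SCAN variants beat FIFO on total arm movement.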

32. N-Step-SCAN
- Segments the disk request queue into subqueues of length N.
- Subqueues are processed one at a time, using SCAN.
- While a queue is being processed, new requests must be added to some other queue.
- If fewer than N requests are available at the end of a scan, all of them are processed with the next scan.

33. FSCAN
- Uses two subqueues.
- When a scan begins, all of the requests are in one of the queues, with the other empty.
- During the scan, all new requests are put into the other queue.
- Service of new requests is deferred until all of the old requests have been processed.

34. RAIDRedundant Array of Independent DisksConsists of seven levels, zero through six

35. RAID
- The term was originally coined in a paper by a group of researchers at the University of California at Berkeley; the paper outlined various configurations and applications and introduced the definitions of the RAID levels.
- The strategy employs multiple disk drives and distributes data in such a way as to enable simultaneous access to data from multiple drives, improving I/O performance and allowing easier incremental increases in capacity.
- The unique contribution is to address effectively the need for redundancy: RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.

36. Table 11.4 RAID Levels (N = number of data disks; m proportional to log N)


39. RAID Level 0
- Not a true member of the RAID family, because it does not include redundancy; it improves performance but provides no data protection.
- User and system data are distributed across all of the disks in the array.
- The logical disk is divided into strips.

40. RAID Level 1
- Redundancy is achieved by the simple expedient of duplicating all the data (mirroring).
- There is no "write penalty".
- When a drive fails, the data may still be accessed from the second drive.
- The principal disadvantage is the cost.

41. RAID Level 2
- Makes use of a parallel access technique.
- Data striping is used.
- Typically a Hamming code is used.
- An effective choice in an environment in which many disk errors occur.

42. RAID Level 3
- Requires only a single redundant disk, no matter how large the disk array.
- Employs parallel access, with data distributed in small strips.
- Can achieve very high data transfer rates.

43. RAID Level 4
- Makes use of an independent access technique.
- A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk.
- Involves a write penalty when an I/O write request of small size is performed.

44. RAID Level 5
- Similar to RAID 4 but distributes the parity strips across all disks.
- A typical allocation is a round-robin scheme.
- Has the characteristic that the loss of any one disk does not result in data loss.
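The parity scheme behind RAID levels 3 through 5 is a bytewise XOR across the corresponding data strips: if any one strip is lost, it equals the XOR of the parity strip with all the surviving strips. A toy sketch (the strip contents are made-up values):

```python
def parity_strip(strips):
    """Bytewise XOR of a list of equal-length data strips."""
    parity = bytes(len(strips[0]))  # all-zero strip to start
    for s in strips:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

def reconstruct(surviving_strips, parity):
    """Recover a lost strip: XOR the parity with every surviving data strip.
    Works because x ^ x = 0, so the survivors cancel out of the parity."""
    return parity_strip(surviving_strips + [parity])
```

This also shows the small-write penalty mentioned on the RAID 4 slide: updating one data strip requires recomputing and rewriting the parity strip as well.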

45. RAID Level 6
- Two different parity calculations are carried out and stored in separate blocks on different disks.
- Provides extremely high data availability.
- Incurs a substantial write penalty, because each write affects two parity blocks.

46. Disk Cache
- The term cache memory is usually applied to a memory that is smaller and faster than main memory and that is interposed between main memory and the processor.
- A cache reduces average memory access time by exploiting the principle of locality.
- A disk cache is a buffer in main memory for disk sectors; it contains a copy of some of the sectors on the disk.

47. Least Recently Used (LRU)
- The most commonly used algorithm for the replacement-strategy design issue.
- The block that has been in the cache the longest with no reference to it is replaced.
- A stack of pointers references the cache: the most recently referenced block is on the top of the stack, and when a block is referenced or brought into the cache it is placed on the top of the stack.
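The stack-of-pointers description above can be sketched with Python's OrderedDict standing in for the stack; the `read_from_disk` callback is a hypothetical stand-in for the miss path, not part of any real cache API:

```python
from collections import OrderedDict

class LRUCache:
    """Disk-cache replacement sketch: the OrderedDict plays the role of the
    stack of pointers, with the most recently referenced block at one end."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # sector -> cached data

    def reference(self, sector, read_from_disk):
        if sector in self.blocks:
            self.blocks.move_to_end(sector)      # hit: move to top of "stack"
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[sector] = read_from_disk(sector)
        return self.blocks[sector]
```

Every reference, hit or miss, puts the block on top, so the bottom of the order is always the block unreferenced for the longest time.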

48. Least Frequently Used (LFU)
- The block that has experienced the fewest references is replaced.
- A counter is associated with each block and is incremented each time the block is accessed.
- When replacement is required, the block with the smallest count is selected.
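The counter scheme above can be sketched the same way; again `read_from_disk` is a hypothetical miss callback, and ties between equal counts are broken arbitrarily here:

```python
class LFUCache:
    """Count-based replacement sketch: evict the block with the fewest references."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}   # sector -> cached data
        self.counts = {}   # sector -> reference count

    def reference(self, sector, read_from_disk):
        if sector not in self.blocks:
            if len(self.blocks) >= self.capacity:
                # Victim is the block with the smallest reference count.
                victim = min(self.counts, key=self.counts.get)
                del self.blocks[victim], self.counts[victim]
            self.blocks[sector] = read_from_disk(sector)
            self.counts[sector] = 0
        self.counts[sector] += 1
        return self.blocks[sector]
```

Note the classic LFU weakness this sketch shares: a block referenced heavily long ago keeps a high count and can crowd out blocks that are hot right now.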


53. UNIX Buffer Cache
- Is essentially a disk cache: I/O operations with the disk are handled through the buffer cache.
- The data transfer between the buffer cache and the user process space always occurs using DMA; this does not use up any processor cycles, but it does consume bus cycles.
- Three lists are maintained:
  - Free list: list of all slots in the cache that are available for allocation.
  - Device list: list of all buffers currently associated with each disk.
  - Driver I/O queue: list of buffers that are actually undergoing or waiting for I/O on a particular device.


55. Character Queue
Used by character-oriented devices such as terminals and printers; a character queue may only be read once, with each character effectively destroyed as it is read.

56. Unbuffered I/O
- Is simply DMA between the device and process space.
- Is always the fastest method for a process to perform I/O.
- The process is locked in main memory and cannot be swapped out.
- The I/O device is tied up with the process for the duration of the transfer, making it unavailable for other processes.

57. Table 11.5 Device I/O in UNIX

58. Linux I/O
- Very similar to other UNIX implementations.
- Associates a special file with each I/O device driver.
- Block, character, and network devices are recognized.
- The default disk scheduler in Linux 2.4 is the Linux Elevator.


60. Anticipatory I/O Scheduler
- Elevator and deadline scheduling can be counterproductive if there are numerous synchronous read requests.
- Is superimposed on the deadline scheduler.
- When a read request is dispatched, the anticipatory scheduler causes the scheduling system to delay: there is a good chance that the application that issued the last read request will issue another read request to the same region of the disk, and that request will be serviced immediately; otherwise the scheduler resumes using the deadline scheduling algorithm.

61. Linux Page Cache
- For Linux 2.4 and later, there is a single unified page cache for all traffic between disk and main memory.
- Benefits: dirty pages can be collected and written out efficiently, and pages in the page cache are likely to be referenced again due to temporal locality.


63. Basic I/O Facilities
- Network drivers: Windows includes integrated networking capabilities and support for remote file systems; the facilities are implemented as software drivers.
- Hardware device drivers: the source code of Windows device drivers is portable across different processor types.
- Cache manager: maps regions of files into kernel virtual memory and then relies on the virtual memory manager to copy pages to and from the files on disk.
- File system drivers: send I/O requests to the software drivers that manage the hardware device adapter.

64. Asynchronous and Synchronous I/O

65. I/O Completion
Windows provides five different techniques for signaling I/O completion: signaling the file object, signaling an event object, asynchronous procedure call, I/O completion ports, and polling.

66. Windows RAID Configurations
Windows supports two sorts of RAID configurations: hardware RAID, in which separate physical disks are combined into one or more logical disks by the disk controller or disk storage cabinet hardware, and software RAID, in which noncontiguous disk space is combined into one or more logical partitions by the fault-tolerant software disk driver.

67. Volume Shadow Copies and Volume Encryption
Volume shadow copies:
- an efficient way of making consistent snapshots of volumes so they can be backed up
- also useful for archiving files on a per-volume basis
- implemented by a software driver that makes copies of data on the volume before it is overwritten
Volume encryption:
- Windows uses BitLocker to encrypt entire volumes
- more secure than encrypting individual files
- allows multiple interlocking layers of security

68. Summary
- I/O devices
- Organization of the I/O function: the evolution of the I/O function; direct memory access
- Operating system design issues: design objectives; logical structure of the I/O function
- I/O buffering: single buffer; double buffer; circular buffer
- Disk scheduling: disk performance parameters; disk scheduling policies
- RAID: RAID levels 0-6
- Disk cache: design and performance considerations
- UNIX SVR4 I/O: buffer cache; character queue; unbuffered I/O; UNIX devices
- Linux I/O: disk scheduling; Linux page cache
- Windows I/O: basic I/O facilities; asynchronous and synchronous I/O; software RAID; volume shadow copies/encryption