Kernel Stack Desirable for security: - PPT Presentation

Uploaded by enteringmalboro on 2020-08-28

Presentation Transcript

Slide1

Slide2

Slide3

Kernel Stack

Desirable for security: e.g., illegal parameters might be supplied

Slide4

What’s on the kernel stack?

Upon entering kernel-mode: task’s registers are saved on kernel stack (e.g., address of task’s user-mode stack)

During execution of kernel functions: function parameters and return-addresses, and storage locations for ‘automatic’ variables

Slide5

Practice

Linux:  <--US--|  and  <--I--<--KS--|
8K per process (thread)
Separate from user stack
PCB contains pointer to the kernel stack

Xinu:  <--I--<--KS--<--US--|
On top of user stack

Slide6

PA0

Slide7

PA0. 0-1

0. Step 2, `cs-status | head -1 | sed 's/://g'`

Step 6, cs-console, (control-@) OR (control-spacebar)

.section .data
.section .text
.globl zfunction
zfunction:
    pushl %ebp
    movl %esp, %ebp
    ....
    leave
    ret

Read http://en.wikibooks.org/wiki/X86_Assembly/GAS_Syntax

In C, we count from 0

Slide8

PA0. 2-3

Try “man end” and see what you can get

Use “kprintf” for output

Read “Print the address of the top of the run-time stack for whichever process you are currently in, right before and right after you get into the printos() function call.” carefully

You can use in-line assembly: use ebp, esp

Slide9

Others

https://vcl.ncsu.edu/help/files-data/where-save-my-files

Course Workspace will also be AFS-based

Moodle forum is up, please use it

Slide10

Lecture 3

Scheduling

Slide11

Workload Assumptions

1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.

Slide12

Scheduling Metrics

Performance: turnaround time

T_turnaround = T_completion − T_arrival

As T_arrival is now 0, T_turnaround = T_completion

Slide13

First In, First Out

Works well under our assumptions

Relax “Each job runs for the same amount of time.”

Convoy effect

[Figure: two timelines from 0 to 120 comparing FIFO schedules; when a long job runs first, the short jobs queue up behind it]

Slide14

Shortest Job First

SJF would be optimal

Relax “All jobs arrive at the same time.”

[Figure: two timelines from 0 to 120 with jobs A, B, C; B and C arrive after A has already started]

Slide15

Shortest Time-to-Completion First

STCF is preemptive, aka PSJF

“Once started, each job runs to completion” relaxed

[Figure: timeline from 0 to 120; A starts, is preempted when B and C arrive, and A resumes after they finish]

Slide16

Scheduling Metrics

Performance: turnaround time

T_turnaround = T_completion − T_arrival

As T_arrival is now 0, T_turnaround = T_completion

Performance: response time

T_response = T_firstrun − T_arrival

Slide17

Turnaround time or response time

FIFO, SJF, or STCF

Round robin

[Figure: two timelines from 0 to 120 comparing a run-to-completion schedule with round-robin time slicing]

Slide18

Conflicting criteria

Minimizing response time requires more context switches for many processes:
incurs more scheduling overhead
decreases system throughput
increases turnaround time

Scheduling algorithm depends on nature of system: batch vs. interactive

Designing a generic AND efficient scheduler is difficult

Slide19

Incorporating I/O

Poor use of resources

Overlap allows better use of resources

[Figure: two CPU/Disk timelines from 0 to 120; without overlap the CPU idles while A does disk I/O, with overlap B runs on the CPU during A’s I/O]

Slide20

Multi-level feedback queue

Goal: optimize turnaround time without a priori knowledge, and optimize response time for interactive users

[Figure: priority queues Q6 (highest) down to Q1 (lowest), holding jobs A, B, C, D at various levels]

Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.

Slide21

How to Change Priority

Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.

Slide22

Examples

[Figure: two MLFQ example timelines from 0 to 120 over queues Q2, Q1, Q0; a long-running job A descends the queues, while a job B that gives up the CPU early stays at high priority]

Problems?

Slide23

Priority Boost

Rule 5: After some time period S, move all the jobs in the system to the topmost queue.

[Figure: two MLFQ timelines from 0 to 120 over queues Q2, Q1, Q0, showing job A with and without the periodic priority boost]

Slide24

Gaming the scheduler

[Figure: two MLFQ timelines from 0 to 120 over queues Q2, Q1, Q0; in the second, a job yields just before its time slice expires, keeps its high priority under Rule 4b, and monopolizes the CPU]

Slide25

Better Accounting

Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).

Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.

Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).

Slide26

Tuning MLFQ And Other Issues

How to parameterize?

The system administrator configures it
The users provide hints

Slide27

Workload Assumptions

1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.

Slide28

MLFQ rules

Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.

Slide29

Scheduling Metrics

Performance: turnaround time
T_turnaround = T_completion − T_arrival
As T_arrival is now 0, T_turnaround = T_completion

Performance: response time
T_response = T_firstrun − T_arrival

CPU utilization
Throughput
Fairness

Slide30

A proportional-share or a fair-share scheduler

Each job obtains a certain percentage of CPU time.

Lottery scheduling uses tickets to represent the share of a resource that a process should receive

If A has 75 tickets and B has 25 tickets, then A gets 75% and B gets 25% (probabilistically)

Random draws: 63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49 49
Winner:        A  B  A  A  B  A  A  A  A  A  A  B  A  B  A  A  A  A  A  A

Higher priority => more tickets

Slide31

Lottery Code

int counter = 0;

// winner: a random number in [0, totaltickets)
int winner = getrandom(0, totaltickets);

// current: walk the job list until the winner's ticket range is reached
node_t *current = head;
while (current) {
    counter += current->tickets;
    if (counter > winner)
        break;
    current = current->next;
}
// 'current' is the winner: schedule it

Slide32

Ticket currency

User A: 100 (global currency) -> 500 (A’s currency) to A1 -> 50 (global currency)
                              -> 500 (A’s currency) to A2 -> 50 (global currency)
User B: 100 (global currency) -> 10 (B’s currency) to B1 -> 100 (global currency)

Slide33

More on Lottery Scheduling

Ticket transfer

Ticket inflation

Compensation ticket

How to assign tickets?

Why not Deterministic?

Slide34

Next

Holiday next Monday

Work on PA0

Reading assigned later tonight