Slide3: Kernel Stack
Desirable for security: e.g., illegal parameters might be supplied.
Slide4: What's on the kernel stack?
Upon entering kernel-mode:
- the task's registers are saved on the kernel stack (e.g., the address of the task's user-mode stack)
During execution of kernel functions:
- function parameters and return addresses
- storage locations for 'automatic' variables
Slide5: Practice
Linux: <--US--|  and  <--I--<--KS--|
- 8K per process (thread)
- separate from the user stack
- the PCB contains a pointer to the kernel stack
Xinu: <--I--<--KS--<--US--|
- on top of the user stack
Slide6: PA0
Slide7: PA0.0-1
Step 2: `cs-status | head -1 | sed 's/://g'`
Step 6: cs-console; exit with (control-@) OR (control-spacebar)
    .section .data
    .section .text
    .globl zfunction
    zfunction:
        pushl %ebp
        movl %esp, %ebp
        ...
        leave
        ret

Read http://en.wikibooks.org/wiki/X86_Assembly/GAS_Syntax
In C, we count from 0.
Slide8: PA0.2-3
Try "man end" and see what you can get.
Use "kprint" for output.
Read "Print the address of the top of the run-time stack for whichever process you are currently in, right before and right after you get into the printos() function call." carefully.
You can use in-line assembly; use ebp, esp.
Slide9: Others
https://vcl.ncsu.edu/help/files-data/where-save-my-files
The course workspace will also be AFS-based.
The Moodle forum is up; please use it.
Slide10: Lecture 3
Scheduling
Slide11: Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Slide12: Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival
As T_arrival is now 0, T_turnaround = T_completion

Slide13: First In, First Out
Works well under our assumptions.
Relax "Each job runs for the same amount of time": the convoy effect.
[Timeline figures on a 0-120 time axis comparing FIFO schedules.]
Slide14: Shortest Job First
SJF would be optimal.
Relax "All jobs arrive at the same time."
[Timeline figures on a 0-120 axis: A runs while B/C arrive partway through A's run.]
Slide15: Shortest Time-to-Completion First
STCF is preemptive, aka PSJF.
"Once started, each job runs to completion" relaxed.
[Timeline figure on a 0-120 axis: A is preempted when B/C arrive and resumes after they finish.]
Slide16: Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival
As T_arrival is now 0, T_turnaround = T_completion
Performance: response time
T_response = T_firstrun − T_arrival
Slide17: Turnaround time or response time?
FIFO, SJF, or STCF vs. round robin
[Timeline figures on a 0-120 axis comparing the two approaches.]
Slide18: Conflicting criteria
Minimizing response time requires more context switches for many processes, which:
- incurs more scheduling overhead
- decreases system throughput
- increases turnaround time
The scheduling algorithm depends on the nature of the system: batch vs. interactive.
Designing a generic AND efficient scheduler is difficult.
Slide19: Incorporating I/O
Treating a job as one block: poor use of resources.
Overlap allows better use of resources.
[Timeline figures on a 0-120 axis with CPU and Disk rows: without overlap, the CPU idles while A waits on the disk; with overlap, B runs on the CPU during A's I/O.]
Slide20: Multi-level feedback queue
Goals:
- optimize turnaround time without a priori knowledge
- optimize response time for interactive users
[Figure: queues Q6 (highest) down to Q1 (lowest), holding jobs A, B, C, D.]
Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
Slide21: How to Change Priority
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
Slide22: Examples
[Timeline figures on a 0-120 axis over queues Q2/Q1/Q0: a CPU-bound job A uses full time slices and drops from Q2 to Q0; in the second figure, an interactive job B keeps yielding before its slice expires, stays at high priority, and alternates with A.]
Problems?
Slide23: Priority Boost
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
[Timeline figures on a 0-120 axis: without the boost, A starves at Q0; with periodic boosts, A returns to the topmost queue and keeps making progress.]
Slide24: Gaming the scheduler
[Timeline figures on a 0-120 axis: a job that yields just before each time slice expires keeps its high priority and monopolizes the CPU.]
Slide25: Better Accounting
Old Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
Old Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
Revised Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
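The revised rule amounts to per-level allotment accounting. A minimal sketch, where the job_t fields and the 200 ms allotment are my illustrative assumptions, not values from the lecture:

```c
/* Sketch of revised Rule 4: track CPU time used at the current level
   and demote once the allotment is exhausted, no matter how often
   the job yielded in between. */
typedef struct {
    int level;       /* current queue; 0 is the lowest */
    int used_ms;     /* CPU time consumed at this level */
} job_t;

static void charge(job_t *j, int ran_ms, int allotment_ms) {
    j->used_ms += ran_ms;
    if (j->used_ms >= allotment_ms && j->level > 0) {
        j->level--;          /* move down one queue */
        j->used_ms = 0;      /* fresh allotment at the new level */
    }
}
```

A job that repeatedly gives up the CPU just before its slice ends is still charged for the time it ran, so the gaming attack from the previous slide no longer works.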
Slide26: Tuning MLFQ And Other Issues
How to parameterize?
- the system administrator configures it
- the users provide hints
Slide27: Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Slide28: MLFQ rules
Rule 1: If Priority(A) > Priority(B), A runs (B doesn't).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
Slide29: Scheduling Metrics
Performance: turnaround time
T_turnaround = T_completion − T_arrival
As T_arrival is now 0, T_turnaround = T_completion
Performance: response time
T_response = T_firstrun − T_arrival
Other metrics: CPU utilization, throughput, fairness
Slide30: A proportional-share or fair-share scheduler
Each job obtains a certain percentage of CPU time.
Lottery scheduling: tickets represent the share of a resource that a process should receive.
If A has 75 tickets and B has 25, then A gets 75% and B gets 25% (probabilistically).
Draws:   63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43  0 49 49
Winners:  A  B  A  A  B  A  A  A  A  A  A  B  A  B  A  A  A  A  A  A
Higher priority => more tickets.
Slide31: Lottery Code

    int counter = 0;
    int winner = getrandom(0, totaltickets);
    node_t *current = head;
    while (current) {
        counter += current->tickets;
        if (counter > winner)
            break;
        current = current->next;
    }
    // current is the winner
Slide32: Ticket currency
User A: 100 (global currency)
  -> 500 (A's currency) to A1 -> 50 (global currency)
  -> 500 (A's currency) to A2 -> 50 (global currency)
User B: 100 (global currency)
  -> 10 (B's currency) to B1 -> 100 (global currency)
Slide33: More on Lottery Scheduling
Ticket transfer
Ticket inflation
Compensation ticket
How to assign tickets?
Why not Deterministic?
Slide34: Next
Holiday next Monday
Work on PA0
Reading assigned later tonight