Why Is DDoS Hard to Solve?
A simple form of attack
Designed to prey on the Internet’s strengths
Easy availability of attack machines
Attack can look like normal traffic
Lack of Internet enforcement tools
Hard to get cooperation from others
Effective solutions hard to deploy
1. Simplicity Of Attack
Basically, just send someone a lot of traffic
More complicated versions can add refinements, but that’s the crux of it
No need to find new vulnerabilities
No need to worry about timing, tracing, etc.
Toolkits are readily available to allow the novice to perform DDoS
Even the distributed parts are very simple
2. Preys On Internet's Strengths
The Internet was designed to deliver lots of traffic
From lots of places, to lots of places
DDoS attackers want to deliver lots of traffic from lots of places to one place
Any individual packet can look proper to the Internet
Without sophisticated analysis, even the entire flow can appear proper
Internet Resource Utilization
Internet was not designed to monitor resource utilization
Most of it follows first come, first served model
Many network services work the same way
And many key underlying mechanisms do, too
Thus, if a villain can get to the important resources first, he can often deny them to good users
3. Availability Of Attack Machines
DDoS is feasible because attackers can enlist many machines
Attackers can enlist many machines because many machines are readily vulnerable
Not hard to find 1,000 crackable machines on the Internet
Particularly if you don't care which 1,000
Botnets numbering hundreds of thousands of hosts have been discovered
Can’t We Fix These Vulnerabilities?
DDoS attacks don't really harm the attacking machines
Many people don’t protect their machines even when the attacks can harm them
Why will they start protecting their machines just to help others?
Altruism has not yet proven to be a compelling argument for network security
4. Attacks Resemble Normal Traffic
A DDoS attack can consist of a vast number of requests for a web server's home page
No need for attacker to use particular packets or packet contents
So neat filtering/signature tools may not help
Attacker can be arbitrarily sophisticated at mirroring legitimate traffic
In principle
Not often done because dumb attacks work so well
5. Lack Of Enforcement Tools
DDoS attackers have never been caught by tracing or observing the attack
Only by old-fashioned detective work
Really, only when they’re dumb enough to boast about their success
The Internet offers no help in tracing a single attack stream, much less multiple ones
Even if you trace them, a clever attacker leaves no clues of his identity on those machines
What Is the Internet Lacking?
No validation of IP source address
No enforcement of amount of resources used
No method of tracking attack flows
Or those controlling attack flows
No method of assigning responsibility for bad packets or packet streams
No mechanism or tools for determining who corrupted a machine
6. Poor Cooperation In the Internet
It’s hard to get anyone to help you stop or trace or prevent an attack
Even your ISP might not be too cooperative
Anyone upstream of your ISP is less likely to be cooperative
ISPs more likely to cooperate with each other, though
Even if cooperation occurs, it occurs at human timescales
The attack might be over by the time you figure out who to call
7. Effective Solutions Hard To Deploy
The easiest place to deploy defensive systems is near your own machine
Defenses there might not work well (firewall example)
There are effective solutions under research
But they require deployment near attackers or in the Internet core
Or, worse, in many places
A working solution is useless without deployment
Hard to get anything deployed if the deploying site gets no direct advantage
Resource Limitations
Don’t allow an individual attack machine to use many of a target’s resources
Requires:
Authentication, or
Making the sender do special work (puzzles)
Authentication schemes are often expensive for the receiver
Existing legitimate senders largely not set up to handle doing special work
Can still be overcome with a large enough army of zombies
Hiding From the Attacker
Make it hard for anyone but legitimate clients to deliver messages at all
E.g., keep your machine’s identity obscure
A possible solution for some potential targets
But not for others, like public web servers
To the extent that approach relies on secrecy, it’s fragile
Some such approaches don't require secrecy
Resource Multiplication
As attacker demands more resources, supply them
Essentially, never allow resources to be depleted
Not always possible, usually expensive
Not clear that defender can keep ahead of the attacker
But still a good step against limited attacks
More advanced versions might use Akamai-like techniques
Trace and Stop Attacks
Figure out which machines attacks come from
Go to those machines (or near them) and stop the attacks
Tracing is trivial if IP source addresses aren’t spoofed
Tracing may be possible even if they are spoofed
May not have ability/authority to do anything once you’ve found the attack machines
Not too helpful if attacker has a vast supply of machines
Filtering Attack Streams
The basis for most defensive approaches
Addresses the core of the problem by limiting the amount of work presented to target
Key question is:
What do you drop?
Good solutions drop all (and only) attack traffic
Less good solutions drop some (or all) of everything
Filtering Vs. Rate Limiting
Filtering drops packets with particular characteristics
If you get the characteristics right, you do little collateral damage
At odds with the desire to drop all attack traffic
Rate limiting drops packets on basis of amount of traffic
Can thus assure target is not overwhelmed
But may drop some good traffic
You can combine them (drop traffic you are sure is suspicious, rate-limit the rest), but the gain is small
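The combination in the last bullet can be sketched as a signature filter sitting in front of a token-bucket rate limiter. This is an illustrative sketch only; the thresholds and the is_suspicious() rule are invented for the example, not taken from the slides.

```python
# Hedged sketch: filter sure-attack traffic, rate-limit everything else.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate                  # tokens replenished per second
        self.tokens = burst               # start with a full bucket
        self.burst = burst                # bucket capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def is_suspicious(pkt):
    # Toy signature (assumed): drop obviously malformed packets outright.
    return pkt.get("flags") == "SYN+FIN"

bucket = TokenBucket(rate=1000, burst=200)

def admit(pkt):
    if is_suspicious(pkt):                # filter: drop traffic we are sure about
        return False
    return bucket.allow()                 # rate limit: cap everything else
```

Filtering first keeps the limiter's budget for traffic that at least might be legitimate, which is exactly why the combination buys only a little: everything not caught by the signature still competes for the same budget.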
Where Do You Filter?
Near the target?
Near the source?
In the network core?
In multiple places?
Filtering Location Choices
Near target
Easier to detect attack
Sees everything
May be hard to prevent collateral damage
May be hard to handle attack volume
Near source
May be hard to detect attack
Doesn't see everything
Easier to prevent collateral damage
Easier to handle attack volume
In core
Easier to handle attack volume
Sees everything (with sufficient deployment)
May be hard to prevent collateral damage
May be hard to detect attack
How Do You Detect Attacks?
Have database of attack signatures
Detect anomalous behavior
By measuring some parameters for a long time and setting a baseline
Detecting when their values are abnormally high
By defining which behavior must be obeyed, starting from some protocol specification
How Do You Filter?
Devise filters that encompass most of the anomalous traffic
Drop everything, but give priority to legitimate-looking traffic
It has expected parameter values
It exhibits expected behavior
DDoS Defense Challenges
Need for a distributed response
Economic and social factors
Lack of detailed attack information
Lack of defense system benchmarks
Difficulty of large-scale testing
Moving target
TCP SYN Flood
Attacker sends lots of TCP SYN packets
Victim sends a SYN-ACK, allocates space in memory
Attacker never replies
Goal is to fill up memory before entries time out and get deleted
Usually spoofed traffic
Otherwise patterns may be used for filtering
OS at the attacker or spoofed address may send RST and free up memory
TCP SYN Cookies
Effective defense against TCP SYN flood
Victim encodes connection information and time in ACK number
Must be hard to craft values that get encoded into the same ACK number – use crypto for encoding
Memory is only reserved when final ACK comes
Only the server must change
But TCP options are not supported
And lost SYN-ACKs are not retransmitted
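A minimal sketch of the cookie idea: encode a coarse time counter plus a keyed hash of the connection 4-tuple into the number the client must echo back, so the server keeps no state until the final ACK. The key, field layout, and hash choice below are illustrative assumptions; real implementations (e.g., Linux's) use a different exact layout and also squeeze in the MSS.

```python
# Hedged SYN-cookie sketch: stateless encode/verify of connection info.
import hmac, hashlib

SECRET = b"per-server secret"   # assumed key; rotated in practice

def make_cookie(src, dst, sport, dport, t):
    t &= 0xFF                   # coarse 8-bit time-slot counter
    mac = hmac.new(SECRET, f"{src}|{dst}|{sport}|{dport}|{t}".encode(),
                   hashlib.sha256).digest()
    # top 8 bits: time slot; low 24 bits: truncated keyed hash
    return (t << 24) | int.from_bytes(mac[:3], "big")

def check_cookie(cookie, src, dst, sport, dport, now):
    t = cookie >> 24
    if t not in (now & 0xFF, (now - 1) & 0xFF):   # too old: reject
        return False
    # recompute; no per-connection state was ever stored
    return make_cookie(src, dst, sport, dport, t) == cookie
```

Because the hash is keyed, an attacker cannot precompute valid cookies for spoofed sources, which is the "use crypto for encoding" point above.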
Small-Packet Floods
Overwhelm routers
Create a lot of pps (packets per second)
Exhaust CPU
Most routers can't handle a full-bandwidth load of small packets
No real solution; must filter packets somehow to reduce router load
Shrew Attack
Periodically slam the victim with short, high-volume pulses
Lead to congestion drops on client’s TCP traffic
TCP backs off
If loss is large, back off to 1 MSS per RTT
Attacker slams again after a few RTTs
Solution requires TCP protocol changes
Tough to implement since clients must be changed
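Back-of-the-envelope arithmetic shows why the shrew attack is so cheap: pulsing roughly at TCP's minimum retransmission timeout (about 1 s in the standard) keeps victims in repeated timeout while the attacker's average rate stays low. The link speed and pulse parameters below are illustrative assumptions.

```python
# Illustrative shrew-attack arithmetic (parameters are assumptions).
link_mbps = 100        # bottleneck capacity
pulse_ms  = 50         # burst just long enough to fill queues and force drops
period_s  = 1.0        # pulse period matched to TCP minRTO, so flows re-time-out

avg_attack_mbps = link_mbps * (pulse_ms / 1000) / period_s
print(avg_attack_mbps)
```

Here the attacker averages only 5 Mbps on a 100 Mbps link (a 5% duty cycle), yet well-timed pulses can hold the victim's TCP throughput near zero, which is why rate-based detection misses it.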
Flash-Crowd Attack
Generate legitimate application traffic to the victim
E.g., DNS requests, Web requests
Usually not spoofed
If enough bots are used, no client appears too aggressive
Really hard to filter since both traffic and client behavior seem identical between attackers and legitimate users
Reflector Attack
Generate service requests to public servers spoofing the victim’s IP
Servers reply back to the victim overwhelming it
Usually done for UDP and ICMP traffic (TCP SYN flood would only overwhelm CPU if huge number of packets is generated)
Often takes advantage of the amplification effect – some service requests lead to huge replies; this lets the attacker amplify his attack
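Rough arithmetic illustrating the amplification effect; the request and reply sizes are ballpark figures for a DNS-style reflector, assumed for the example rather than measured.

```python
# Illustrative amplification arithmetic (sizes are rough assumptions).
request_bytes  = 64      # small spoofed query carrying the victim's IP
response_bytes = 3000    # large reply the server sends to the victim

amplification = response_bytes / request_bytes      # bytes out per byte in
attacker_uplink_mbps = 10
victim_mbps = attacker_uplink_mbps * amplification  # traffic hitting the victim
print(amplification, victim_mbps)
```

With these numbers a 10 Mbps attacker uplink turns into roughly 470 Mbps aimed at the victim, which is why reflectors with large replies are so attractive.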
Sample Research Defenses
Pushback
Traceback
SOS
Proof-of-work systems
Human behavior modeling
Pushback1
Goal: Preferentially drop attack traffic to relieve congestion
Local ACC: Enable core routers to respond to congestion locally by:
Profiling traffic dropped by RED
Identifying high-bandwidth aggregates
Preferentially dropping aggregate traffic to enforce desired bandwidth limit
Pushback: A router identifies the upstream neighbors that forward the aggregate traffic to it, requests that they deploy rate-limit
1 "Controlling high bandwidth aggregates in the network," Mahajan, Bellovin, Floyd, Paxson, Shenker, ACM CCR, July 2002
Can it Work?
Even a few core routers are able to control high-volume attacks
Separation of traffic aggregates improves current situation
Only traffic for the victim is dropped
Drops affect a portion containing the attack traffic
Likely to successfully control the attack, relieving congestion in the Internet
Will inflict collateral damage on legitimate traffic
Advantages and Limitations
Routers can handle high traffic volumes
Deployment at a few core routers can affect many traffic flows, due to core topology
Simple operation, no overhead for routers
Pushback minimizes collateral damage by placing response close to the sources
Pushback only works in contiguous deployment
Collateral damage is inflicted by the response whenever attack traffic is not clearly separable
Requires modification of existing core routers
Traceback1
Goal: locate the agent machines
Each packet header may carry a mark, containing:
EdgeID (IP addresses of the routers) specifying an edge it has traversed
The distance from the edge
Routers mark packets probabilistically
If a router detects a half-marked packet (containing only one IP address), it completes the mark
Victim under attack reconstructs the path from the marked packets
1 "Practical network support for IP traceback," Savage, Wetherall, Karlin, Anderson, ACM SIGCOMM 2000
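The probabilistic edge marking can be sketched as follows, in the spirit of Savage et al.'s scheme: each router starts a mark with probability p, the next router completes a half-marked edge, and every later router just increments the distance. Field handling is simplified here and the dict-based "packet" is an illustration, not the paper's bit layout.

```python
# Hedged sketch of probabilistic edge marking for IP traceback.
import random

P_MARK = 0.04   # marking probability (the paper suggests roughly 1/25)

def forward(packet, router_ip):
    if random.random() < P_MARK:
        packet["start"] = router_ip       # begin a new edge mark
        packet["end"] = None              # half-marked: only one address so far
        packet["distance"] = 0
    elif packet.get("distance") == 0 and packet.get("end") is None:
        packet["end"] = router_ip         # complete the half-marked edge
        packet["distance"] = 1
    elif "distance" in packet:
        packet["distance"] += 1           # count hops since the edge
    return packet
```

The victim collects (start, end, distance) triples from many packets and chains edges together by distance to rebuild the path toward the attacker; that reconstruction step is where the "few thousand packets" cost on the next slide comes from.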
Traceback and IP Spoofing
Traceback does nothing to stop DDoS attacks
It only identifies attackers' true locations
Comes to the vicinity of the attacker
If IP spoofing were not possible in the Internet, traceback would not be necessary
There are other approaches to filter out spoofed traffic
Can it Work?
Incrementally deployable, a few disjoint routers can provide beneficial information
Moderate router overhead (packet modification)
A few thousand packets are needed even for long path reconstruction
Does not work well for highly distributed attacks
Path reassembly is computationally demanding, and is not 100% accurate:
Path information cannot be used for legal purposes
Routers close to the sources can efficiently block attack traffic, minimizing collateral damage
Advantages and Limitations
Incrementally deployable
Effective for non-distributed attacks and for highly overlapping attack paths
Facilitates locating routers close to the sources
Packet marking incurs overhead at routers, must be performed on the slow path
Path reassembly is complex and prone to errors
Reassembly of distributed attack paths is prohibitively expensive
SOS1
Goal: route only “verified user” traffic to the server, drop everything else
Clients use overlay network to reach the server
Clients are authenticated at the overlay entrance, their packets are routed to proxies
A small set of proxies are "approved" to reach the server; all other traffic is heavily filtered out
1 "SOS: Secure Overlay Services," Keromytis, Misra, Rubenstein, ACM SIGCOMM 2002
SOS
User first contacts nodes that can check its legitimacy and let it access the overlay – access points
An overlay node uses the Chord overlay routing protocol to send the user's packets to a beacon
The beacon sends packets to a secret servlet
Secret servlets tunnel packets to the firewall
The firewall only lets through packets with the IP of a secret servlet
Secret servlets' identities have to be hidden, because their source addresses are a passport for the realm beyond the firewall
Beacons are nodes that know the identity of the secret servlets
If a node fails, other nodes can take its role
Can It Work?
SOS successfully protects communication with a private server:
Access points can distinguish legitimate from attack communications
Overlay protects traffic flow
Firewall drops attack packets
Redundancy in the overlay and secrecy of the path to the target provide security against DoS attacks on SOS
Advantages And Limitations
Ensures communication of a "verified user" with the victim
Resilient to overlay node failure
Resilient to DoS on the defense system
Does not work for public services
Traffic routed through the overlay travels on a suboptimal path
Brute-force attack on links leading to the firewall still possible
Client Puzzles1
Goal: defend against connection depletion attacks
When under attack:
Server distributes small cryptographic puzzles to clients requesting service
Clients spend resources to solve the puzzles
Correct solution, submitted on time, leads to state allocation and connection establishment
Non-validated connection packets are dropped
Puzzle generation is stateless
Client cannot reuse puzzle solutions
Attacker cannot make use of intercepted packets
1 "Client puzzles: A cryptographic countermeasure against connection depletion attacks," Juels, Brainard, NDSS 1999
Can It Work?
Client puzzles guarantee that each client has spent a certain amount of resources
Server determines the difficulty of the puzzle according to its resource consumption
Effectively server controls its resource consumption
Protocol is safe against replay or interception attacks
Other flooding attacks will still work
Advantages And Limitations
Forces the attacker to spend resources, protects server resources from depletion
Attacker can only generate a certain number of successful connections from one agent machine
Low overhead on server
Requires client modification
Will not work against highly distributed attacks
Will not work against bandwidth consumption attacks (the Defense By Offense paper changes this)
Human Behavior Modeling1
Goal: defend against flash-crowd attacks on Web servers
Model human behavior along three dimensions
Dynamics of interaction with server (trained)
Detect aggressive clients as attackers
Semantics of interaction with server (trained)
Detect clients that browse unpopular content or use unpopular paths as attackers
Processing of visual and textual cues
Detect clients that click on invisible or uninteresting links as attackers
1 "Modeling Human Behavior for Defense Against Flash Crowd Attacks," Oikonomou, Mirkovic 2009
Can It Work?
Attackers can bypass detection if they
Act non-aggressively
Use each bot for just a few requests, then replace it
But this forces attacker to use many bots
Tens to hundreds of thousands
Beyond reach of most attackers
Other flooding attacks will still work
Advantages And Limitations
Transparent to users
Low false positives and false negatives
Requires server modification
Server must store data about each client
Will not work against other flooding attacks
May not protect services where humans do not generate traffic, e.g., DNS
Worms
Viruses vs. Worms
Viruses don't break into your computer – they are invited by you
They cannot spread unless you run an infected application or click on an infected attachment
Early viruses spread onto different applications on your computer
Contemporary viruses spread as attachments through e-mail; they will mail themselves to people from your address book
Worms break into your computer using some vulnerability, install malicious code and move on to other machines
You don't have to do anything to make them spread
What is a Worm?
A program that:
Scans the network for vulnerable machines
Breaks into machines by exploiting the vulnerability
Installs some piece of malicious code – backdoor, DDoS tool
Moves on
Unlike viruses:
Worms don't need any user action to spread – they spread silently and on their own
Worms don't attach themselves to other programs – they exist as separate code in memory
Sometimes you may not even know your machine has been infected by a worm
Why Are Worms Dangerous?
They spread extremely fast
They are silent
Once they are out, they cannot be recalled
They usually install malicious code
They clog the network
First Worm Ever – Morris Worm
Robert Morris, a PhD student at Cornell, was interested in network security
He created the first worm, with the goal of having a program live on the Internet, in Nov. 1988
The worm was supposed only to spread, fairly slowly
It was supposed to take just a little bit of resources, so as not to draw attention to itself
But things went wrong …
The worm was supposed to avoid duplicate copies by asking a computer whether it was infected
To avoid false "yes" answers, it was programmed to duplicate itself every 7th time it received a "yes" answer
This turned out to be too much
First Worm Ever – Morris Worm
It exploited four vulnerabilities to break in:
A bug in sendmail
A bug in the finger daemon
A trusted-hosts feature (.rhosts files)
Password guessing
The worm was replicating at a much faster rate than anticipated
At that time the Internet was small and homogeneous (SUN and VAX workstations running BSD UNIX)
It infected around 6,000 computers, one tenth of the then-Internet, in a day
First Worm Ever – Morris Worm
People quickly devised patches and distributed them (the Internet was small then)
A week later all systems were patched and worm code was removed from most of them
No lasting damage was caused
Robert Morris paid a $10,000 fine, was placed on probation and did some community work
The worm exposed vulnerabilities not only in UNIX but, more importantly, in Internet organization
Users didn't know whom to contact to report an infection or where to look for patches
First Worm Ever – Morris Worm
In response to the Morris Worm, DARPA formed CERT (Computer Emergency Response Team) in November 1988
Users report incidents and get help in handling them from CERT
CERT publishes security advisory notes informing users of new vulnerabilities that need to be patched and how to patch them
CERT facilitates security discussions and advocates better system management practices
Code Red
Spread on July 12 and 19, 2001
Exploited a vulnerability in Microsoft Internet Information Server that allows an attacker to get full access to the machine (server turned on by default)
Two variants – both probed random machines, one with a static seed for its RNG, the other (CRv2) with a random seed
CRv2 infected more than 359,000 computers in less than 14 hours
It doubled in size every 37 minutes
At the peak of infection, more than 2,000 hosts were infected each minute
Code Red v2
Code Red v2
43% of infected machines were in the US
47% of infected machines were home computers
The worm was programmed to stop spreading at midnight, then attack www1.whitehouse.gov
It had a hardcoded IP address, so the White House was able to thwart the attack by simply changing the IP address-to-name mapping
Estimated damage: ~$2.6 billion
Sapphire/Slammer Worm
Spread on January 25, 2003
The fastest computer worm in history
It doubled in size every 8.5 seconds
It infected more than 90% of vulnerable hosts within 10 minutes
It infected 75,000 hosts overall
Exploited a buffer overflow vulnerability in Microsoft SQL Server, discovered 6 months earlier
Sapphire/Slammer Worm
No malicious payload
The aggressive spread had severe consequences
Created a DoS effect
It disrupted backbone operation
Airline flights were canceled
Some ATM machines failed
Why Was Slammer So Fast?
Both Slammer and Code Red 2 use random scanning
Code Red uses multiple threads that establish TCP connections through the 3-way handshake – it must wait for the other party to reply or for the TCP timeout to expire
Slammer packs its code into a single UDP packet – its speed is limited only by how many UDP packets a machine can send
Could we do the same trick with Code Red?
Slammer's authors tried to use linear congruential generators to generate random addresses for scanning, but programmed them wrong
Sapphire/Slammer Worm
43% of infected machines were in the US
59% of infected machines were home computers
Response was fast – after an hour, sites started filtering packets for the SQL Server port
BGP Impact of Slammer Worm
Stuxnet Worm
Discovered in June/July 2010
Targets industrial equipment
Uses Windows vulnerabilities (known and new) to break in
Installs a PLC (Programmable Logic Controller) rootkit and reprograms the PLC
Without the physical schematic it is impossible to tell what the ultimate effect is
Spreads via USB drives
Updates itself either by reporting to a server or by exchanging code with a new copy of the worm
Scanning Strategies
Many worms use random scanning
This works well only if machines have very good RNGs with different seeds
Getting a large initial population represents a problem
Then the infection rate skyrockets
The infection eventually reaches saturation since all machines are probing the same addresses
"Warhol Worms: The Potential for Very Fast Internet Plagues," Nicholas C. Weaver
Random Scanning
Scanning Strategies
A worm can get a large initial population with hitlist scanning
Assemble a list of potentially vulnerable machines prior to releasing the worm – a hitlist
E.g., through a slow scan
When the scan finds a vulnerable machine, the hitlist is divided in half and one half is communicated to the newly infected machine
This guarantees very fast spread – under one minute!
Hitlist Scanning
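The hitlist-halving idea can be sketched as a synchronous-rounds model: each round, every infected machine infects the head of its remaining list, then hands half of what is left to the new victim. Assuming every probe succeeds instantly (an idealization), the list is consumed in roughly log2(n) rounds, which is where the "under one minute" claim comes from.

```python
# Hedged toy model of hitlist halving (every probe assumed to succeed).
def rounds_to_cover(n):
    lists = [n]                        # per-machine remaining hitlist sizes
    rounds = 0
    while any(s > 0 for s in lists):
        nxt = []
        for s in lists:
            if s > 0:
                s -= 1                 # infect one target this round
                nxt.append(s - s // 2) # infector keeps half (rounded up)
                nxt.append(s // 2)     # new victim takes the other half
            else:
                nxt.append(0)
        lists = nxt
        rounds += 1
    return rounds
```

A 10,000-entry hitlist is exhausted in 14 such rounds; even at one probe round per second, the whole initial population is infected in seconds, after which ordinary scanning takes over.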
Scanning Strategies
A worm can prevent die-out at the end with permutation scanning
All machines share a common pseudorandom permutation of the IP address space
Infected machines continue scanning just after their point in the permutation
If they encounter an already infected machine, they continue from a random point
Partitioned permutation is the combination of permutation and hitlist scanning
In the beginning the permutation space is halved on each infection; later, scanning is a simple permutation scan
Permutation Scanning
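Permutation scanning can be sketched with a toy affine permutation of a small address space. The constants, the 16-bit space, and the vulnerable set below are invented for illustration; a real worm would permute all of IPv4 with a stronger block cipher.

```python
# Hedged sketch of permutation scanning over a toy 16-bit address space.
import random

SPACE = 2**16
A, B = 40503, 12345            # A odd, so x -> (A*x + B) mod 2**16 is a bijection

# assumed vulnerable set, fixed deterministically for the example
VULNERABLE = {(i * 131) % SPACE for i in range(500)}

def perm(x):
    return (A * x + B) % SPACE

def scan(start, infected, budget=1000):
    """Walk the shared permutation from `start`; on hitting an
    already-infected host, restart from a random point. Returns the
    set of hosts newly infected within the probe budget."""
    x = start
    newly = set()
    for _ in range(budget):
        x = perm(x)
        if x in infected:                  # someone got here first:
            x = random.randrange(SPACE)    # jump to a random point
        elif x in VULNERABLE:
            infected.add(x)
            newly.add(x)
    return newly
```

Because every instance walks the same permutation, hitting an already-infected host is a signal that this region is covered, which is what prevents the late-stage duplicate probing that plagues random scanning.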
Scanning Strategies
A worm can get behind the firewall, or notice the die-out, and then switch to subnet scanning
Goes sequentially through the subnet address space, trying every address
Infection Strategies
Several ways to download the malicious code:
From a central server
From the machine that performed the infection
Send it along with the exploit in a single packet
Worm Defense
Three factors define worm spread:
Size of vulnerable population
Prevention – patch vulnerabilities, increase heterogeneity
Rate of infection (scanning and propagation strategy)
Deploy firewalls
Distribute worm signatures
Length of infectious period
Patch vulnerabilities after the outbreak
How Well Can Containment Do?
This depends on several factors:
Reaction time
Containment strategy – address blacklisting and content filtering
Deployment scenario – where the response is deployed
Evaluate the effect of containment 24 hours after the onset
"Internet Quarantine: Requirements for Containing Self-Propagating Code," D. Moore, C. Shannon, G. Voelker, S. Savage, Proceedings of INFOCOM 2003
How Well Can Containment Do?
Code Red
Idealized deployment: everyone deploys defenses after a given period
How Well Can Containment Do?
Depending on Worm Aggressiveness
Idealized deployment: everyone deploys defenses after a given period
How Well Can Containment Do?
Depending on Deployment Pattern
How Well Can Containment Do?
Reaction time needs to be within minutes, if not seconds
We need to use content filtering
We need extensive deployment at key points in the Internet
Detecting and Stopping Worm Spread
Monitor outgoing connection attempts to new hosts
When the rate exceeds 5 per second, put the remaining requests in a queue
When the number of requests in the queue exceeds 100, stop all communication
"Implementing and testing a virus throttle," J. Twycross, M. Williamson, Proceedings of Usenix Security Symposium 2003
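The throttle described above, with the slide's numbers (5 new hosts per second, 100-entry queue), might look like the sketch below. The class shape and return values are illustrative assumptions; the published throttle sits in the network stack rather than in application code.

```python
# Hedged sketch of the Twycross/Williamson virus throttle.
from collections import deque

RATE_PER_SEC = 5     # queued new-host requests released per second
QUEUE_LIMIT = 100    # beyond this, assume a worm and cut the host off

class Throttle:
    def __init__(self):
        self.seen = set()       # hosts this machine has contacted before
        self.queue = deque()    # pending connections to *new* hosts
        self.blocked = False

    def request(self, host):
        """Returns 'allow', 'queue', or 'block'."""
        if self.blocked:
            return "block"
        if host in self.seen:
            return "allow"              # known host: normal traffic, no delay
        self.queue.append(host)
        if len(self.queue) > QUEUE_LIMIT:
            self.blocked = True         # worm-like fan-out: stop everything
            return "block"
        return "queue"

    def tick(self):
        """Call RATE_PER_SEC times a second: release one queued host."""
        if self.queue and not self.blocked:
            self.seen.add(self.queue.popleft())
```

Human-driven traffic revisits a small working set of hosts and barely notices the 5/s cap, while a scanning worm floods the queue within seconds and trips the block.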
Cooperative Strategies for Worm Defense
Organizations share alerts and worm signatures with their "friends"
Severity of alerts is increased as more infection attempts are detected
Each host has a severity threshold after which it deploys a response
Alerts spread just like the worm does
They must spread faster to overtake the worm
After some time with no new infection detections, alerts are removed
"Cooperative Response Strategies for Large-Scale Attack Mitigation," D. Nojiri, J. Rowe, K. Levitt, Proceedings of DISCEX 2003
Cooperative Strategies for Worm Defense
As the number of friends increases, response is faster
Propagating false alarms is a problem
Early Worm Detection
Early detection would give time to react before the infection has spread
The goal of this paper is to devise techniques that detect new worms just as they start spreading
Monitoring:
Monitor and collect worm scan traffic
Observation data is very noisy, so we have to separate new scans from:
Old worms' scans
Port scans by hacking toolkits
C. C. Zou, W. Gong, D. Towsley, and L. Gao, "The Monitoring and Early Detection of Internet Worms," IEEE/ACM Transactions on Networking
Early Worm Detection
Detection:
Traditional anomaly detection: threshold-based
Check traffic burst (short-term or long-term)
Difficulty: false alarm rate
"Trend detection":
Measure the number of infected hosts and use it to detect the worm's exponential growth trend at the beginning
Assumptions
Worms uniformly scan the Internet
No hitlists, but subnet scanning is allowed
Address space scanned is IPv4
Worm Propagation Model
Simple epidemic model
(figure annotation: detect the worm here, while it still shows exponential spread)
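The simple epidemic model behind this slide is dI/dt = beta * I * (N - I) / N: growth is nearly exponential while the infected count I is small compared with the population N, which is exactly the window where the trend detector must fire. A tiny Euler-integration sketch with Code Red-like parameters (illustrative, matching the simulation slide below only in spirit):

```python
# Hedged numeric sketch of the simple (logistic) epidemic model.
N = 360_000          # vulnerable population (Code Red scale)
beta = 1.8           # per-hour infection rate
I = 10.0             # initially infected hosts
dt = 0.01            # time step, hours

trajectory = []
for step in range(int(24 / dt)):          # simulate one day
    I += beta * I * (N - I) / N * dt      # dI/dt = beta*I*(N-I)/N
    trajectory.append(I)

# While I << N, I(t) tracks I0 * exp(beta * t); later it saturates at N.
```

After one simulated hour the infected count has grown about e^1.8-fold (roughly 60 hosts from 10), still invisible to volume thresholds; by the end of the day the epidemic has saturated, which is why detection must key on the growth trend rather than on absolute traffic levels.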
Monitoring System
Monitoring System
Provides comprehensive observation data on a worm's activities for the early detection of the worm
Consists of:
Malware Warning Center (MWC)
Distributed monitors
Ingress scan monitors – monitor incoming traffic going to unused addresses
Egress scan monitors – monitor outgoing traffic
Monitoring System
Ingress monitors collect:
Number of scans received in an interval
IP addresses of infected hosts that have sent scans to the monitors
Egress monitors collect:
Average worm scan rate
Malware Warning Center (MWC) monitors:
Worm's average scan rate
Total number of scans monitored
Number of infected hosts observed
Worm Detection
MWC collects and aggregates reports from distributed monitors
If the total number of scans is over a threshold for several consecutive intervals, MWC activates the Kalman filter and begins to test the hypothesis that the number of infected hosts grows exponentially
Code Red Simulation
Population: N = 360,000; infection rate = 1.8/hour; scan rate = 358/min; initially infected: I0 = 10
Monitored IP space: 2^20; monitoring interval: 1 minute
(figure: infected-hosts estimation)
Slammer Simulation
Population: N = 100,000; scan rate = 4000/sec; initially infected: I0 = 10
Monitored IP space: 2^20; monitoring interval: 1 second
(figure: infected-hosts estimation)