Measuring Bandwidth

Kevin Lai and Mary Baker
{laik, mgbaker}@cs.stanford.edu
Department of Computer Science, Stanford University

Abstract: Accurate network bandwidth measurement is important to a variety of network applications. Unfortunately, accurate bandwidth measurement is difficult. We describe some current bandwidth measurement techniques: using throughput, pathchar [8], and Packet Pair [2]. We explain some of the problems with these techniques, including poor accuracy, poor scalability, lack of statistical robustness, poor agility in adapting to bandwidth changes, lack of flexibility in deployment, and inaccuracy when used on a variety of traffic types. Our solutions to these problems include using a packet window to adapt quickly to bandwidth changes, Receiver Only Packet Pair to combine accuracy and ease of deployment, and Potential Bandwidth Filtering to increase accuracy. Our techniques are at least as accurate as previously used filtering algorithms, and in some situations, our techniques are more than 37% more accurate.

A common complaint about the Internet is that it is slow. Some of this slowness is due to properties of the end points, like slow servers, but some is due to properties of the network, like propagation delay and limited bandwidth. Propagation delay can be measured using widely deployed and well understood algorithms implemented in tools like ping and traceroute. Unfortunately, tools to measure bandwidth are neither widely deployed nor well understood. This work attempts to develop further understanding of how to measure bandwidth.

Current bandwidth measurement techniques have many problems: poor accuracy, poor scalability, lack of statistical robustness, poor agility in adapting to bandwidth changes, lack of flexibility in deployment, and inaccuracy when used on a variety of traffic types. We propose solutions to these problems and demonstrate their effectiveness:

Packet Window: We use a packet window (not the TCP window) to adapt quickly to bandwidth changes. In the presence of link failure, a small window is 144% more accurate than an infinite window.

Receiver Only Packet Pair: We use Receiver Only Packet Pair to allow the deployment of special software at only one host while achieving accuracy within 1% of Receiver Based Packet Pair [12].

Potential Bandwidth Filtering: We use Potential Bandwidth Filtering to measure bandwidth accurately in the presence of a variety of packet sizes. In such an environment, it is at least 37% more accurate than previously used filtering algorithms [5] [12].

This research is supported by a gift from NTT Mobile Communications Network, Inc. (NTT DoCoMo), a graduate fellowship from the USENIX Association, and a Sloan Foundation Faculty Fellowship.

Our overall goal is to make Packet Pair algorithms practical and robust enough to be widely and frequently used. Our approach has been to derive simple algorithms from statistically valid network models and avoid heuristics. Heuristics, especially in combination, tend to be difficult to debug and explain and lack the robustness to apply to diverse network environments.

The rest of the paper is organized as follows: In Section II, we present motivation for examining bandwidth measurement techniques. In Section III, we propose ways to make Packet Pair algorithms robust and practical. In Section IV, we describe bottleneck bandwidth algorithms. In Section V, we describe our new Packet Pair filtering algorithm, Potential Bandwidth Filtering. In Section VI, we describe how we simulated different bottleneck bandwidth algorithms on a variety of networks. In Section VII, we present the results of our simulations. In Section VIII, we describe our plans for further exploration of this area. Finally, we conclude in Section IX with our overall observations about the algorithms.

In this section, we describe the motivation for examining bandwidth measurement techniques.

A. Applications

Several applications could benefit from knowing the bottleneck bandwidth of a route. Developers of network protocols and applications need to know the bottleneck bandwidth to judge the efficiency of their protocols and applications. For example, if an HTTP server is delivering data at close to the bottleneck bandwidth, then increasing the bandwidth of that link may increase application performance. However, if the bottleneck link already has plenty of bandwidth to spare, increasing its bandwidth will probably not improve application performance.

Network clients could dynamically choose the best server for an operation based on the highest bottleneck bandwidth. This has been suggested as a way to choose a web server or proxy [4] [15].

In addition, accurate and timely bandwidth measurement is useful for mobile computing. Mobile computers frequently have more than one network interface, often with very different bandwidths (e.g., 10 Mb/s Ethernet and 28 Kb/s wireless). Knowing the bandwidth would allow the mobile host to pick the highest bandwidth interface as the default interface and to degrade service gracefully when it detects that it is operating on a low bandwidth link.
Another application is congestion control. TCP already implicitly measures the bandwidth of the network so that it will not send packets faster than the network can handle, but this has certain disadvantages described in the next section.

Finally, we could use bandwidth information to build multicast routing trees more efficiently and dynamically. Ideally, multicast routing trees would be built so that packets travel along a tree that minimizes duplicate packets and latency while maximizing bandwidth. Currently, multicast routing trees are built either without bandwidth information or with only static information.

B. Metrics

We distinguish between the bottleneck bandwidth and the available bandwidth of a route. The bottleneck bandwidth of a route is the ideal bandwidth of the lowest bandwidth link (the bottleneck link) on that route between two hosts. In most networks, as long as the route between the two hosts remains the same, the bottleneck bandwidth remains the same. The bottleneck bandwidth is not affected by other traffic. In contrast, the available bandwidth of a route is the maximum bandwidth at which a host can transmit at a given point in time along that route. Available bandwidth is limited by other traffic along that route.

The question of which is the better metric can only be answered by the application. Some applications want to know which route will give them the minimum delay or want to use an estimate taken longer than a few seconds ago. For these applications, bottleneck bandwidth is probably the best metric. Some applications are only interested in the best average throughput. For these applications, available bandwidth is probably the best metric.

We are interested in both metrics, but have chosen to investigate bottleneck bandwidth first because it is a more stable metric and is therefore useful over a longer period of time, and because it bounds the available bandwidth and can therefore be used later to more accurately compute available bandwidth (see Section VIII).
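The distinction between the two metrics can be made concrete with a small sketch. All link capacities and cross-traffic figures below are invented for illustration; they do not come from the paper.

```python
# Hypothetical route used only to illustrate the two metrics from Section B.
route = [100.0, 10.0, 45.0]          # per-link ideal bandwidth, Mb/s
cross_traffic = [20.0, 3.0, 5.0]     # bandwidth consumed by other traffic, Mb/s

# Bottleneck bandwidth: capacity of the slowest link, independent of traffic.
bottleneck = min(route)

# Available bandwidth: what is left after other traffic, on the worst link.
available = min(c - x for c, x in zip(route, cross_traffic))

assert available <= bottleneck       # bottleneck bounds available bandwidth
print(bottleneck, available)         # -> 10.0 7.0
```

The final assertion is the property the paper exploits: because the bottleneck bandwidth bounds the available bandwidth, measuring the former can later help compute the latter.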

C. Current Techniques

Given the importance of measuring bandwidth, it is not surprising that there are currently several techniques for doing so. However, all have drawbacks for at least some of the applications described above.

The most popular technique is to use throughput as an approximation of bandwidth. Throughput is the amount of data a transport protocol like TCP can transfer per unit of time. One problem with throughput is that other metrics (e.g., packet drop rate) may have a significant effect on TCP throughput, while not affecting bandwidth.

Another problem with measuring throughput is that an application's throughput to a host implies nothing about other transfers, even from the same application to the same host. For example, a web browser sending a request to a web server may experience low throughput because that request involved running a slow CGI script. The same browser sending a different request could experience high throughput because the latter request did not involve running a CGI script. Correlating the throughput of different applications (like telnet and http) is even more inaccurate.

TCP uses another technique to estimate bandwidth. It sends more and more packets until one is dropped. It estimates the bandwidth to be somewhere between the sending rate when the packet was dropped and half that rate. This has several problems: 1) TCP is measuring the bottleneck router buffer size in addition to the bottleneck bandwidth, 2) TCP wastes network resources by forcing a dropped packet and filling the router buffers, and 3) TCP has to increase its sending rate slowly or else it will overshoot the real bandwidth and cause massive packet loss. The last problem is particularly acute on high bandwidth, high latency links, such as satellite connections, because TCP needs time proportional to the bandwidth-delay product to reach the maximum transmission rate.

Another solution is to use pathchar [8]. As far as we know, pathchar is unique in its ability to measure the bandwidth of every link on a path accurately while requiring special software on only one host. This means it could easily be widely deployed. Although excellent as a testing tool, the problem with pathchar is that it is slow and can consume significant amounts of network bandwidth. In particular, pathchar runs in time proportional to the round trip time of the network and sends a relatively fixed amount of data, regardless of the actual bandwidth of the network (see Section IV-A). If more hosts were to run pathchar, its packets would become a significant burden on the network [15]. Even for isolated hosts with low bandwidth connections, pathchar could consume too much bandwidth to be usable regularly.

The bandwidth measurement technique we have chosen to investigate is called Packet Pair [2] (described in more detail in Section IV-B). The advantages it has over the techniques mentioned above are that 1) it measures the true bandwidth of the network (instead of throughput), 2) it does not cause packet loss (unlike TCP), 3) it does not require many packet round trips to work, and 4) it does not send massive amounts of data (unlike pathchar).

On the other hand, each of those techniques currently has a robust and practical implementation and is in common use in the Internet, while Packet Pair does not have such an implementation and is rarely used at all (although several tools implement a version of the Packet Pair algorithm, including bprobe [5] and tcpanaly [11]). Here are some of the problems with current Packet Pair algorithms and how we propose to solve them:

Not statistically robust: Previous work on filtering Packet Pair samples has used techniques such as adding error bars to values and intersecting them [5], or clustering values that are close together [12]. Such heuristics do not have well understood statistical properties, and their effectiveness may depend on a particular data set. Instead, we use a kernel density estimator to filter data, which has well understood statistical properties [14]. In particular, it makes no assumptions about the data, and is therefore statistically robust (see Section IV-C).

Not scalable: Active Packet Pair implementations [5] generate their own traffic and therefore have the same scalability problems as pathchar (see Section IV-A). Packet Pair algorithms do not need to do this [12]. Our passive Packet Pair implementation uses existing network traffic to measure bandwidth.

Slow: On the other hand, previous passive implementations are designed to analyze a TCP connection after its completion [12], instead of while it is occurring. Given the long duration of some TCP connections, this could be too late for some of the applications mentioned above.

Our gradual Packet Pair implementation forms a bandwidth estimate for every packet that arrives. It initially gives inaccurate answers and then gradually converges to an accurate answer. In this way, applications obtain an estimate as soon as it is available (see Section IV-E). Our results show that Packet Pair can give the correct estimate within three packets of the start of the connection.

Not robust on all traffic: Most passive implementations are designed to use traffic composed of mostly large packets. However, Internet traffic is a mix of many packet sizes, and any one flow between two hosts may contain a wide variety of packet sizes. Existing passive implementations do not account for this and thus give inaccurate results on diverse traffic such as a mix of predominantly small packets and a few large packets (see Section V). This is because the network model used in those implementations does not account for potential bandwidth. We designed a new Packet Pair filtering algorithm, called Potential Bandwidth Filtering (PBF), which can deliver an accurate answer despite variation in packet size and transmission rate. On a mix of packet sizes, PBF is at least 37% more accurate than the standard filtering algorithm.

Not flexible to bandwidth changes: Some prior implementations detect only one bandwidth over time [5]. Other implementations can detect multiple bandwidths, but only those which differ by a large amount [12]. Although not as frequent as congestion changes, bottleneck bandwidth changes do happen because of routing changes [12] or because of mobility [1]. Some of the applications described above would like to know as soon as possible that the bandwidth has changed and what the new bandwidth is, regardless of magnitude. To accomplish the above, we propose the use of a limited window of past packets to calculate bandwidth. This increases the speed at which the algorithm can adapt to a new bandwidth (the agility), but it leaves the results more vulnerable to noise. We believe that an increase in agility fundamentally requires becoming more vulnerable to noise (see Section IV-E). We show that a small window is 144% more accurate than an infinite window in the presence of link failure, but 10% less accurate in the presence of congestion.

Difficult to deploy: A current highly accurate Packet Pair algorithm, Receiver Based Packet Pair (RBPP) [12], requires that packet timings be taken at both the sender and receiver of those packets. This means that special software must be deployed at both the sender and receiver, which may not be possible. Another algorithm, Sender Based Packet Pair (SBPP) [12], requires timings (and therefore special software) only on the sender. Unfortunately, SBPP is far less accurate than RBPP. We describe a variation of RBPP called Receiver Only Packet Pair (ROPP), which is more accurate than SBPP (but less than RBPP), while only requiring timings at the receiver (see Section IV-D). This allows applications to trade some accuracy for ease of deployment. Our results show that ROPP is accurate within 1% of RBPP.

Not studied under controlled conditions: There have been several studies of Packet Pair algorithms using data from the Internet. This has the advantage of using real TCP/IP code, routers, and network traffic. However, we would like: 1) verifiable and reproducible results and 2) testing under a variety of controlled conditions. Testing under controlled conditions in the Internet at large would be difficult, if not impossible. To overcome these limitations, we use a network simulator (fully described in Section VI) to compare the effectiveness of the algorithms and modifications described above to previous Packet Pair implementations.

In this section, we describe the models and assumptions of the algorithms for measuring bottleneck bandwidth. We consider their accuracy, timeliness, and agility, as well as whether they are active or passive and whether they require measurements from multiple network hosts.

We know of two families of bottleneck bandwidth algorithms. The first family of algorithms, which we call the Pathchar Algorithms, is used in the tools pathchar [8] and utimer [6]. The second family of algorithms is based on the Packet Pair algorithm and is used in the tools bprobe [5], cprobe [5], and tcpanaly [11]. Variants of the Packet Pair algorithm are Sender Based Packet Pair (SBPP), Receiver Based Packet Pair (RBPP), Packet Bunch Mode (PBM) [12], and our own Receiver Only Packet Pair (ROPP). An orthogonal issue for Packet Pair algorithms is how they filter bandwidth samples. We call the standard algorithms Measured Bandwidth Filtering (MBF) and propose our own Potential Bandwidth Filtering (PBF).
In addition, we describe our refinements of the Packet Pair algorithms and their filtering methods: the use of a kernel density function to increase statistical robustness, the use of gradual bandwidth calculation to increase timeliness, and the use of a packet window to increase agility.

A. Pathchar Algorithm

In this section, we analyze the time taken and bandwidth consumed by the Pathchar algorithm. The program works by sending packets of varying sizes and measuring their round trip time. It correlates the round trip times with the packet sizes to calculate bandwidth. It uses the results from earlier hops for calculations on farther hops. For a more thorough description of how and why pathchar works, see [8]. For our purposes, all we need to know is that the pathchar program uses an active algorithm that sends packets varying in size from 64 bytes to the path MTU with a stride of 32 bytes. Therefore, the number of different packet sizes pathchar sends is

    s = floor((MTU - 64) / 32) + 1    (1)

For Ethernet, the MTU is 1500 bytes, so s is 45. In addition, it sends q packets per size for every hop. In the default configuration, q = 32. It must wait for each packet it sends to be acknowledged before sending the next packet. Thus, the total time for pathchar to run is

    t = q * s * sum_{i=1..h} r_i    (2)

where h is the number of hops and r_i is the round trip latency from the sender to hop i. We assume that the receiver immediately sends an ack in response to a packet and that the sender immediately sends out the next packet when an ack arrives. For a 10-hop Ethernet network with an average round trip latency of 10 ms, pathchar would run in 144 seconds. This is too slow for a host to run it for every TCP connection, or even every 10 minutes. It can be configured to send fewer packets of each size, but at the cost of accuracy.

More importantly, pathchar consumes considerable amounts of network bandwidth. The average bandwidth used for probing a particular hop is approximately

    b = (64 + MTU) / (2 * r)    (3)

in bytes/s, where r is the round trip latency (in seconds) across that hop. For a 1-hop Ethernet network with a latency of 1 ms, the average bandwidth consumed is 6.02 Mb/s. This would be a considerable imposition on a 10 Mb/s Ethernet. Farther hops would consume less bandwidth, but pathchar always has to probe closer hops before farther hops. Furthermore, the total data transferred is approximately

    d = h * q * s * (64 + MTU) / 2    (4)

where h is the number of hops. For the 10-hop Ethernet network mentioned before, pathchar sends 10 MB of data. In fact, pathchar will send 10 MB of data on a 10-hop network regardless of the bandwidth of the network, since the total only depends on the number of hops, the path MTU, and q. If the path MTU is high and one of the early hops is a low bandwidth network link, such as a 56K modem, then pathchar can consume most of the bandwidth of that link for an extended amount of time. This means that we would have problems scaling pathchar usage up to a large number of hosts.
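The cost model above can be checked with a few lines of arithmetic. The function name and defaults below are our own; the numbers follow the paper's 10-hop Ethernet example.

```python
# Rough cost model for pathchar, following equations (1), (2), and (4).
def pathchar_cost(mtu=1500, hops=10, rtt=0.010, q=32):
    s = (mtu - 64) // 32 + 1                # number of distinct packet sizes (1)
    total_time = q * s * hops * rtt         # one packet per RTT, per size, per hop (2)
    avg_size = (64 + mtu) / 2               # mean probe packet size in bytes
    total_data = hops * q * s * avg_size    # total bytes sent (4)
    return s, total_time, total_data

s, t, d = pathchar_cost()
print(s)          # -> 45 distinct packet sizes for a 1500-byte MTU
print(round(t))   # -> 144 seconds for 10 hops at 10 ms RTT each
# d comes out around 11 MB, on the order of the paper's 10 MB figure.
```

Note how the totals depend only on the MTU, hop count, latency, and q, never on the path's bandwidth, which is exactly the scaling problem described above.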

B. Packet Pair

The basic Packet Pair algorithm [9] relies on the fact that if two packets are queued next to each other at the bottleneck link, then they will exit the link t seconds apart:

    t = s / b_bn    (5)

where s is the size of the second packet and b_bn is the bottleneck bandwidth. This is the bottleneck separation (see Figure 1). Since there are no links with lower bandwidth than the bottleneck link downstream of that link, and assuming the packets are the same size, the second packet will never catch up to the first packet. The two packets have to be the same size because different size packets have different velocities. If the second packet were smaller than the first, then its transmission delay would always be less than the first packet's. Consequently, it would pass through links faster than the first packet and quickly eliminate the bottleneck separation. Similarly, if the first packet is smaller, then it will be faster than the second packet and continuously grow the bottleneck separation.

Assuming the bottleneck separation is constant, the two packets will arrive at the receiver spaced t seconds apart. Since we know s, we can then calculate the bottleneck bandwidth:

    b_bn = s / t    (6)

This algorithm makes several assumptions that may not hold in practice. For instance, it is impossible to guarantee that two packets will queue next to each other at the bottleneck link. If other packets queue in between the two measurement packets, then (6) becomes

    b_bn = (s + s_o) / t    (7)

where s_o is the total size of the other packets. In addition, if other packets queue ahead of the first packet when it is downstream of the bottleneck link, those extra packets will delay the first packet, causing time compression of the two packets.
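The core estimate in equation (6) is a one-line computation. The timestamps and packet size below are invented for illustration.

```python
# Minimal sketch of the basic Packet Pair estimate, equation (6).
def packet_pair_estimate(arrival1, arrival2, second_packet_bytes):
    # Bottleneck bandwidth = size of second packet / inter-arrival spacing.
    spacing = arrival2 - arrival1
    return second_packet_bytes / spacing        # bytes per second

# Two 1500-byte packets arriving 1.2 ms apart imply a 10 Mb/s bottleneck:
bw = packet_pair_estimate(0.0500, 0.0512, 1500)
print(round(bw * 8 / 1e6, 6))  # -> 10.0 (Mb/s)
```

Cross traffic breaks the clean relationship: queueing between the pair (equation (7)) widens the spacing, while queueing ahead of the first packet narrows it, which is why the filtering of Section IV-C is needed.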
[Fig. 1. This figure shows how the Packet Pair algorithm works, with packets flowing from sender through routers and the bottleneck link to the receiver. Note how the data packets have a greater separation after the bottleneck link and how this separation is maintained by the acks. The arrows pointing to SBPP (packet send timings), RBPP (send and arrival timings), and ROPP (packet arrival timings only) indicate what timing information must be sent from the sender and receiver for each of the algorithms.]

Similarly, other packets could only delay the second packet, causing the packets to be time extended. Time compression can cause a high estimate of the bottleneck bandwidth, while time extension can cause a low estimate.

C. Packet Pair Filtering

The main problem with the basic Packet Pair algorithm is how to filter out the noise caused by time compressed and extended packets. One solution would be to take the mean or median of all the bandwidth samples. Unfortunately, the noise has little correlation to the true bandwidth, so this gives wildly varying estimates.

Previous Packet Pair research has proposed finding the point of greatest density in the distribution of bandwidth estimates.

The idea is that valid samples should be closely clustered around the correct value, while incorrect samples should not be clustered around any one value. A well known method for doing this is to use a histogram. Unfortunately, there are several problems with histograms. One problem is that bin widths are fixed, and it is difficult to choose an appropriate bin width without previously knowing something about the distribution. Another problem is that bin boundaries do not respect the distribution. Two points could lie very close to each other on either side of a bin boundary, and the bin boundary ignores that relationship. Finally, a histogram will report the same density for points with the same value as for points which are in the same bin but at opposite ends of the bin. Previous Packet Pair filtering algorithms [5] [11] have overcome some of these problems, but not all of them.

We use the kernel density estimator algorithm, which overcomes all of these problems. This algorithm is well known to statisticians [14] [16]. To use it, we first define a kernel function K with the property

    integral of K(x) dx = 1    (8)

Then the density at any point x is

    d(x) = sum_{i=1..n} K((x - x_i) / w)    (9)

where w is the kernel width, n is the number of points within w of x, and x_i is the i-th such point. The kernel function we use is

    K(x) = 1 - |x|    (10)

This function has the desirable properties that it gives greater weight to samples closer to the point at which we want to estimate density, and it is simple and fast to compute. The kernel density estimator algorithm is known to be statistically valid, and we show in Section VII that it gives accurate results. Most importantly, it makes no assumptions about the distribution it operates on and therefore should be just as accurate on other data sets.
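A small sketch of the kernel density filter: report the sample at the point of greatest estimated density. The bandwidth samples are made up, with most clustering near a notional true value of 10 Mb/s and a few time compressed or extended outliers.

```python
# Sketch of kernel-density filtering of Packet Pair samples (Section IV-C).
def kernel(x):
    # Triangular kernel, equation (10): weight 1 at distance 0, 0 beyond width.
    return max(0.0, 1.0 - abs(x))

def density(point, samples, width):
    # Equation (9): sum the kernel weights of samples near the point.
    return sum(kernel((point - s) / width) for s in samples)

# Noisy bandwidth samples in Mb/s; outliers do not cluster anywhere.
samples = [9.8, 10.1, 10.0, 9.9, 10.2, 3.0, 27.0, 15.0]
estimate = max(samples, key=lambda s: density(s, samples, width=1.0))
print(estimate)  # -> 10.0
```

Unlike a histogram, the density here is evaluated at each sample itself, so no bin width or bin boundary choice can split the cluster.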

D. Receiver and Sender Based Packet Pair

Receiver Based Packet Pair (RBPP) and Sender Based Packet Pair (SBPP) (both [12]) are types of Packet Pair algorithms. They differ in how the t from (6) is measured. Figure 1 shows the difference in where timing measurements must be taken. In Receiver Based Packet Pair, t is measured at the receiver, so (6) becomes

    b_bn = s / (a_2 - a_1)    (11)

where a_1 and a_2 are the arrival times of the first and second packets, respectively.
If we cannot measure the arrival times at the receiver, we have to use the round trip time, which is measured at the sender (SBPP). Equation (6) becomes

    b_bn = s / (c_2 - c_1)    (12)

where c_1 and c_2 are the arrival times of the acks to the first and second packets, respectively. This assumes that the receiver promptly sends back an acknowledgement for both of the packets. With SBPP, packets from other hosts could interfere with the acks as well as the original packets.

In both the receiver and sender based algorithms, we can apply additional filtering techniques to reject incorrect estimates. We can detect time compression or reordering when two packets have a difference between their transmission times greater than the difference between their arrival times (for RBPP) or their round trip times (for SBPP).

RBPP and SBPP are useful in different circumstances. RBPP is more accurate, but it can be harder to deploy since it requires measurement collection at both endpoints. SBPP is easy to deploy, but its results can be highly inaccurate during congestion (see Section VII). Another difference is that SBPP requires that packets be acknowledged (as in TCP) and that the acks be of constant size and relatively small. The acks must be of constant size because variation in ack size causes variation in total round trip time, which would cause noise in the bandwidth samples. The acks must be small because as they become larger, the bandwidth of the path back to the sender would start to become the bottleneck. If the bottleneck bandwidth of the path back to the sender is much less than that from the sender (as in an asymmetric network), then ack size becomes that much more important (we see this effect in Section VII).

Finally, the algorithms differ in the kind of traffic they can use and the paths they can measure. SBPP relies on data packets flowing away from the measurement host and can only measure the bandwidth of the path from the sender to the receiver.

RBPP can use whatever traffic is available. In the usual situation of data packets flowing in one direction and acks flowing in the other, RBPP can determine the bandwidth in both directions. However, the usually small size of the acks will limit the bandwidth that can be measured (see Section V).

Some applications may need the high accuracy of RBPP and the ease of deployment of SBPP. For those applications, we propose Receiver Only Packet Pair (ROPP). As shown in Figure 1, ROPP only takes timing measurements from the receiver and is therefore easier to deploy than RBPP. However, without timing information from the sender, ROPP cannot filter out time compressed packets or reordered packets, as SBPP and RBPP can. On the other hand, it is much less likely than SBPP to have such samples (like RBPP) because it is not relying on round trip latency. Another limitation is that it cannot use the new Potential Bandwidth Filtering algorithm described in Section V. Finally, it has the limitation that it needs packets (although these can be acks) flowing on paths towards the measurement host and can only determine the bandwidth of such paths.
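The three variants can be contrasted in a few lines, assuming equations (11) and (12) and the compression check described above. All timestamps are invented; the function names are ours.

```python
# Sketch of where RBPP, ROPP, and SBPP take their timings (Section IV-D).
def rbpp(send1, send2, arr1, arr2, size):
    # Receiver Based: both endpoints' timings allow rejecting a pair whose
    # spacing shrank in flight (time compression or reordering).
    if (arr2 - arr1) < (send2 - send1):
        return None                      # filtered out
    return size / (arr2 - arr1)          # equation (11), bytes per second

def ropp(arr1, arr2, size):
    # Receiver Only: arrival timings alone; no compression check is possible.
    return size / (arr2 - arr1)

def sbpp(ack1, ack2, size):
    # Sender Based: ack arrival times at the sender, equation (12).
    return size / (ack2 - ack1)

size = 1500.0                            # bytes, invented
print(ropp(0.1000, 0.1012, size) * 8 / 1e6)          # roughly 10 Mb/s
print(rbpp(0.0, 0.0001, 0.1000, 0.1012, size))       # accepted sample
print(rbpp(0.0, 0.0020, 0.1000, 0.1012, size))       # None: compressed pair
```

The sketch makes the deployment tradeoff visible: ROPP needs only the receiver's clock, at the cost of the compression filter that RBPP's two-sided timings permit.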

Despite these limitations, our results show that ROPP achieves accuracy within 1% of RBPP (see Section VII). We conclude that ROPP achieves the ease of deployment of SBPP while sacrificing little accuracy. It is an excellent choice for applications needing to know the bandwidth of paths towards a host.

E. Timeliness versus Accuracy

In this section, we describe the tradeoff of accuracy versus timeliness in Packet Pair algorithms and how we implemented our algorithms to take advantage of these tradeoffs. The Packet Pair algorithms described in the previous sections are usually implemented as running over a fixed number of packets or over an entire connection before providing an estimate. This translates into a long delay before providing an estimate. One problem is that some applications would prefer to have a ballpark answer sooner in addition to an accurate answer later.

Our solution is to calculate bandwidth gradually. Instead of calculating a single bandwidth, we calculate a new estimate with every packet arrival. In Section VII-B, we show that a gradual algorithm can converge to the correct bandwidth within three packets, instead of having to wait the entire life of the connection.

A problem with the gradual Packet Pair algorithm is that it is slow to detect a bandwidth change, i.e., it has poor agility. A bandwidth change may be caused by a route change, such as a link failing, or by host mobility. The gradual algorithm described above will initially detect a bandwidth change as noise and stick to its initial estimate. To compensate for this problem and to be able to detect multiple bandwidth changes, we use a packet window. We use at most w (the window size) packets into the past to calculate the bandwidth at a particular packet. This has the advantage that only the most recent and probably most relevant samples are used to calculate bandwidth.

The disadvantage of using a window is that it may reduce stability. With smaller windows, we are more affected by transient conditions like congestion, which we may detect as a temporary bandwidth change, as shown in Section VII-B. We believe this is a fundamental tradeoff. A Packet Pair algorithm cannot distinguish between true changes in bandwidth and persistent congestion. However, given this fundamental limitation, our addition of windows to the basic Packet Pair algorithm enables it to distinguish bandwidth changes in the presence of light to moderate congestion.
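The gradual, windowed idea can be sketched as follows. The class name and sample stream are invented, and a plain median stands in for the paper's kernel density filter just to keep the sketch short.

```python
# Sketch of a gradual, windowed Packet Pair estimator (Section IV-E):
# a new estimate after every packet, using only the last w samples.
from collections import deque

class WindowedPacketPair:
    def __init__(self, window_size):
        self.samples = deque(maxlen=window_size)  # old samples fall out

    def add_sample(self, bandwidth_sample):
        self.samples.append(bandwidth_sample)
        # Median as a stand-in filter; the paper uses a kernel density
        # estimator over the windowed samples instead.
        ordered = sorted(self.samples)
        return ordered[len(ordered) // 2]

est = WindowedPacketPair(window_size=5)
stream = [10.0, 10.1, 9.9, 10.0, 10.1,   # old ~10 Mb/s path
          2.0, 2.1, 1.9, 2.0, 2.1]       # after a route change to ~2 Mb/s
for s in stream:
    latest = est.add_sample(s)
print(latest)  # -> 2.0 (the window has forgotten the old bandwidth)
```

With an infinite window, the old 10 Mb/s samples would keep dragging the estimate up; with a window of 5, the estimator tracks the change within a few packets, at the cost of being more swayed by transient congestion.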

In this section, we describe a previously unaddressed problem with using the filtering algorithm described above in Section IV-C. We call that algorithm Measured Bandwidth Filtering (MBF) to distinguish it from our solution to the problem, Potential Bandwidth Filtering (PBF).

A. The Potential Bandwidth Problem

One problem with Packet Pair algorithms is that they cannot measure a higher bandwidth than the bandwidth at which the sender sends. If a sender sends two packets of 1000 bytes each with 1 ms separation, then the receiver cannot measure a higher bottleneck bandwidth than 8 Mb/s, even if the true bottleneck bandwidth is 100 Mb/s. This is a fundamental property of all Packet Pair algorithms, regardless of how the filtering is done. We call the bandwidth at which the sender sends two packets the potential bandwidth, because the measured bandwidth cannot exceed it.

The problem arises when the sender sends small packets, or sends packets slowly, or both. Then the potential bandwidth is likely to be lower than the actual bottleneck bandwidth of the path, and any measured bandwidth will be wrong. Fortunately, some packets have a large potential bandwidth. Most HTTP and FTP packets are large and rapidly sent, and therefore have a high potential bandwidth. Unfortunately, it may be that not all packets in a flow have a high potential bandwidth, and in fact, it may frequently be the case that the high potential bandwidth packets are not the most common type of packets. For example, consider someone browsing a site using HTTP/1.1. HTTP/1.1 opens one TCP connection to a site and uses that connection for all communication. The client will receive many large packets filled with HTML pages while sending many acks and a few medium-sized packets filled with HTTP requests.

medium-sized packets filled with HTTP requests. The outbound link will be dominated by many small packets, with a few medium-sized packets. If we used the normal MBF algorithm, we would report the measured bandwidth of the small packets.

We discovered this problem in our simulation of an asymmetric network, where it is even more of a problem. On an asymmetric network with a high bandwidth inbound link and a low bandwidth outbound link, an inbound data transfer will fill the outbound link with acks at a packets-per-second rate that is likely to exceed that of any outbound data packets.

B. The Potential Bandwidth Filtering Solution

The general idea of Potential Bandwidth Filtering is that we should correlate the potential bandwidth and measured bandwidth of a sample in deciding how to filter. Samples with the same potential bandwidth and measured bandwidth are not particularly informative, because the actual bandwidth could be much higher. Samples with high measured bandwidth and low potential bandwidth are time compressed and should be filtered out. Samples with high potential bandwidth and low measured bandwidth are the most informative, because they are likely to indicate the

true bandwidth.

Fig. 2. This graph shows how PBF works. The dots represent bandwidth samples plotted using their potential bandwidth (x-axis) and measured bandwidth (y-axis); the lines x = y and y = b are drawn, with a knee at x = b. All samples above the x = y line are filtered out. Notice how there is a knee in the samples.

We implement the algorithm by plotting all the samples on a graph with potential bandwidth on the x-axis and measured bandwidth on the y-axis. An example is shown in Figure 2. We would expect that, in the absence of congestion, the samples would fall along the x = y line until some point b. These samples have potential bandwidth approximately equal to measured bandwidth; the packets that generated these samples did not queue behind each other at the bottleneck link. After b, the samples should run along the y = b line. These are the samples with higher potential bandwidth than measured bandwidth; the packets that generated these samples did queue behind each other at the bottleneck link. The value b is the actual bandwidth. If the samples never diverge from the x = y line, then we know that our samples had insufficient potential bandwidth. For example, this would be the case if we only had the samples to the left of b in Figure 2. In this case, we should try an active algorithm.

To compensate for noise, we fit the x = y and y = b lines to the data and compute the relative error for each point as the distance of that point to the nearest line divided by the x-value of that point. This ensures that errors where x is large do not dominate the calculation. We then sum the errors for all the points and attempt to minimize that sum to choose the optimal b. Our results show that PBF is just as accurate as MBF on an Ethernet network, and 37% to 435% more accurate than MBF on an asymmetric network (see Section VII-C).
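To make the fitting step concrete, the following sketch (our illustration, not code from the paper) picks b by trying each sample's measured bandwidth as the candidate knee and minimizing the summed relative error; for simplicity it uses vertical distance to each line rather than perpendicular distance.

```python
def pbf_estimate(samples):
    """Choose the knee b from (potential_bw, measured_bw) samples by
    minimizing the summed relative error to the x = y and y = b lines."""
    # Samples above the x = y line are time compressed; filter them out.
    usable = [(x, y) for (x, y) in samples if 0 < y <= x]

    def total_error(b):
        # Distance to the nearer of the two lines, divided by the
        # sample's x-value so that errors at large x do not dominate.
        return sum(min(abs(y - x), abs(y - b)) / x for x, y in usable)

    # Try each observed measured bandwidth as a candidate for b.
    return min(sorted({y for _, y in usable}), key=total_error)
```

On clean data that follows the x = y line and then flattens, the candidate search recovers the flat level; if no sample diverges from x = y, the samples had insufficient potential bandwidth and, as noted above, an active algorithm should be tried instead.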

We believe that PBF is essential to the practical use of passive Packet Pair algorithms.

In this section, we discuss why we use a network simulator, how we simulated the network, and why we believe the results are valid. We use a network simulator because 1) we want verifiable and reproducible results, 2) we want to test the algorithms in a variety of conditions, and 3) we believe the limitations of current simulator technology have limited and accountable effects on our experiments. We discuss this final point in Section VI-B.

A. Simulator Goals and Setup

In this section, we describe our

goals for the simulator and how we configure it to meet those goals. Our goal for the simulation is to stress the algorithms in both optimal and pathological conditions. We want to know how the worst possible conditions affect these algorithms. The bottleneck bandwidth algorithms are affected by the following conditions:

1. Lack of Queueing at the Bottleneck Link. This destroys the causality between packet arrival times and the bottleneck bandwidth.
2. Queueing after the Bottleneck Link. This also destroys the causality between packet arrival times and the bottleneck bandwidth.
3. Packet Loss. This

causes algorithms to take longer to converge.
4. Changing Bottleneck Bandwidth. Some algorithms detect this faster than others.
5. Asymmetric Bandwidth. This could cause algorithms that assume symmetric bandwidth paths to fail. In particular, TCP packets arriving through a high bandwidth downlink will cause many acks to exit the low bandwidth uplink. These low potential bandwidth acks may cause MBF algorithms to fail (see Section V-A).

To model these conditions in a controlled manner, we used the ns network simulator [10]. We generated an 87-node network using the tiers topology generator [3]. tiers generates a network that reflects the semi-hierarchical topology of the Internet. The topology consists of 4 Wide Area Network (WAN) nodes, 16 Metropolitan Area Network (MAN) nodes, and 67 Local Area Network (LAN) nodes, and includes redundant links between different MAN nodes and LAN nodes. The client is usually several hops from the server, and sometimes as many as 14 hops away, depending on which links have failed. The traffic measured is one TCP connection from the client to the server beginning at 0.5 seconds into the simulation. The client and server are on different LANs and MANs.
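The role of potential bandwidth in condition 5 can be checked with simple arithmetic: a packet-pair sample can never measure more than the second packet's size divided by the pair's send separation (Section V-A). The ack timing below is illustrative, not a measurement from our simulations.

```python
def potential_bandwidth(packet_bytes, separation_s):
    """Bandwidth at which a sender emits a packet pair: the second packet's
    size over the inter-send gap. No Packet Pair sample can exceed this."""
    return packet_bytes * 8 / separation_s

# The example from Section V-A: two 1000-byte packets sent 1ms apart
# can measure at most 8Mb/s, even over a 100Mb/s bottleneck.
assert abs(potential_bandwidth(1000, 0.001) - 8e6) < 1.0

# A 40-byte ack emitted every 3ms (illustrative numbers) has a potential
# bandwidth of only ~107Kb/s, so such samples cannot reveal a fast uplink.
assert potential_bandwidth(40, 0.003) < 500e3
```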

TABLE I
CLIENT CONNECTION LINKS

Link Type             Bandwidth  Latency
Cable Modem Uplink    500Kb/s    3ms
Cable Modem Downlink  10Mb/s     3ms
Ethernet              10Mb/s     3ms

TABLE II
TRAFFIC SOURCE PARAMETERS

Size        Burst   Idle    Shape
1500 bytes  1000ms  500ms   1.5
576 bytes   500ms   1000ms  1.5
41 bytes    50ms    1000ms  1.5

The simulation runs for 30 seconds of simulation time. The different link characteristics are summarized in Table III. We varied three simulation parameters: client connectivity, congestion, and link failure model. We used the two client connections listed in Table I. Only the client is connected to the network using one of the client connections; all other nodes use links described in Table III.

We created congestion by placing three traffic sources at each LAN node. Each source sends data according to a Pareto distribution [7]. The parameters for these traffic sources are summarized in Table II. We varied congestion by using average data rates of 0Kb/s, 400Kb/s, and 1Mb/s. The variety of levels of congestion allows us to explore situations where the packets to and from the client did not queue together at the bottleneck link and/or did queue after the bottleneck link.

We varied the link failure model by using either no failure or a deterministic failure model where selected links along the path from client to server fail at specific times.

TABLE III
LINK CHARACTERISTICS

From  To    Modeling  BW       Latency
WAN   WAN   T3        44Mb/s   40ms
WAN   MAN   Ethernet  10Mb/s   20ms
MAN   MAN   Ethernet  10Mb/s   10ms
MAN   LAN   Ethernet  10Mb/s   10ms
LAN   LAN   Ethernet  10Mb/s   5ms
WAN   MAN   T1        1.5Mb/s  20ms
MAN   LAN   T1        1.5Mb/s  20ms

The first
link fails for 5.0 seconds beginning at 10.0 seconds. The second link fails for 6.0 seconds at 20.0 seconds. We chose the following two links for failure: client LAN to client MAN, and WAN to server MAN. Link failures cause packets to be lost and, when combined with redundant links, create the possibility of multiple paths, asymmetric bandwidth, and changing bottleneck bandwidth.

B. Simulator Validity

We believe that the limitations of current simulator technology have a limited effect on our results. Although the Internet exhibits effects that no current simulator can reproduce [13], our results do not depend on having high fidelity. Our goal is not to determine precisely how well these algorithms perform in the Internet on average. Our goal is to compare how well these algorithms perform under certain conditions known to exist in the

Internet.

In this section we present the simulation results. The following tables show the accuracy of Packet Pair algorithms and how they react to changes in network conditions. We use gradual versions of all the algorithms described in Section IV, so we compute a bandwidth estimate for every packet arrival. We then calculate the difference from the estimate to the real bandwidth at that point in time (the real bandwidth varies in some of the simulations). We express this difference as a ratio of the error to the actual bandwidth. The tables show the mean of these ratios. For example, a 0.10 mean error indicates that the algorithm's estimate deviated by an average of 10% from the actual bandwidth.

In the tables shown later, the Alg. column describes which algorithm we are using: Sender Based (SB), Receiver Based (RB), and Receiver Only (RO). The Filter column describes the filtering algorithm used. The BW column lists the actual bottleneck bandwidth of the route between sender and receiver. The Fail column lists whether links fail in the simulation. The W column gives the size of the packet window. The Traffic column gives the amount of extra traffic in the simulation. The Mean, σ, Med., and Max. columns describe the mean, standard deviation, median, and maximum, respectively, of the ratio of the estimate error to actual bandwidth.

For the graphs, we plot the bandwidth measured against elapsed time in the flow. We collect measurements at every packet arrival. Packet arrivals are not evenly distributed in time, so the points are not evenly distributed along the x-axis. In each graph we also plot the actual bandwidth so we can gauge the accuracy of each algorithm. Note that all graphs in this section plot bandwidth on a log scale starting at 10,000 b/s.

A. Receiver Only Measurements

In this section we compare the accuracy of Sender Based Packet Pair, Receiver Based Packet Pair, and the new Receiver Only Packet Pair. We configure the simulator to use Ethernet as the client technology and use either no congestion or 400Kb/s of congestion.

TABLE IV
RECEIVER ONLY ACCURACY

Alg.  Traffic  Mean    σ       Med.   Max.
SB    0Kb/s    0.001   0.026   0.000  0.998
RB    0Kb/s    0.001   0.028   0.000  0.998
RO    0Kb/s    0.001   0.035   0.000  0.998
SB    400Kb/s  12.204  12.264  1.000  25.667
RO    400Kb/s  0.009   0.051   0.006  0.998
RB    400Kb/s  0.008   0.034   0.005  0.998

TABLE V
WINDOW SIZE ACCURACY

Alg.  Fail  W    Traffic  Mean   σ      Med.   Max.
RB    no    ∞    400Kb/s  0.009  0.051  0.006  0.998
RB    no    128  400Kb/s  0.005  0.037  0.000  0.998
RB    no    32   400Kb/s  0.110  0.252  0.000  0.998
RB    yes   ∞    0Kb/s    1.705  2.598  0.000  5.667
RB    yes   128  0Kb/s    0.602  1.463  0.000  5.667
RB    yes   32   0Kb/s    0.263  0.794  0.000  5.667

The results are summarized in Table IV. With no congestion, all of the Packet Pair algorithms have less than 1% error. With 400Kb/s of congestion, the 1200% error of SBPP is probably too much for most applications, while the errors of RBPP and ROPP are still less than 1%. These results confirm our assertion in Section IV-D that ROPP can achieve an accuracy close to that of RBPP while maintaining the ease of deployment of SBPP.

B. Congestion Tolerance and Detecting Bandwidth Changes

In this section, we explore how well RBPP tolerates congestion and detects bandwidth changes. We use RBPP because it is more accurate than SBPP and ROPP, and we wanted to isolate the effects of our new algorithms. We enable RBPP to detect bandwidth changes by setting a packet window size of less than ∞. The question is whether our assertions in Section IV-E are accurate: that a larger window size will be more resistant to congestion and a smaller window size will adapt more quickly to bandwidth changes. The results indicate that this assertion is correct.
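The window mechanism from Section IV-E can be sketched as follows. This is our illustrative sketch, not the paper's implementation: it forms one raw sample per consecutive packet pair and, for simplicity, summarizes the last w samples with a median rather than the density-based filtering of Section IV-C.

```python
from collections import deque

def windowed_packet_pair(arrivals, w):
    """Per-arrival bottleneck bandwidth estimates from a receiver trace of
    (arrival_time_s, packet_bytes), using only the w most recent samples."""
    window = deque(maxlen=w)  # drops the oldest sample automatically
    estimates = []
    for (t0, _), (t1, size) in zip(arrivals, arrivals[1:]):
        if t1 > t0:
            window.append(size * 8 / (t1 - t0))  # bits / inter-arrival gap
        if window:
            s = sorted(window)
            estimates.append(s[len(s) // 2])  # median stands in for filtering
    return estimates
```

With packets spaced for 10Mb/s followed by a drop to 1.5Mb/s, a small window tracks the change within a few packets while a large window remains dominated by old samples, which is the agility/stability tradeoff measured in Table V.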

Table V summarizes the statistics. The first three lines show the accuracy of three different window sizes when experiencing moderate congestion. As expected, smaller window values give less accurate results. However, even the 11% average error of the w = 32 estimate is probably tolerable for
Fig. 3. This graph shows the effect of varying window size (w) in an Ethernet client simulation with varying bandwidth and no congestion; bandwidth (b/s, log scale) is plotted against elapsed time (seconds) for the actual bandwidth and the w = 128 and w = 32 estimates. Notice the change in actual bandwidth at 10, 15, 20 and 26 seconds.

many applications. The next three lines show how different window sizes affect agility. To test the agility of smaller window sizes in adapting to changing bandwidth, we configure the simulator to shut down the primary links periodically and route traffic through lower bandwidth secondary links (see Section VI). When we use all the packets from the beginning of the connection (i.e., w = ∞), RBPP has significant error. As we would expect, the error decreases as we decrease the window size. The

estimate with w = 32 is 144% more accurate than the w = 128 estimate.

To visualize the effect of the different window sizes, we plotted the estimated and actual bandwidth in Figure 3. Notice the changes in actual bandwidth (the thin solid line) at 10, 15, 20 and 26 seconds. The actual bandwidth begins at 10Mb/s (the bottleneck bandwidth of the primary route), dips to 1.5Mb/s (the bottleneck bandwidth of the secondary route) at 10 seconds, rises again to 10Mb/s at 15 seconds, and switches again between these values at 20 and 26 seconds. We removed the w = ∞ plot from this graph because it always remains at 10Mb/s and obscures the other plots.

Notice that all the estimates jump to the correct estimate within three packets of the start of the connection. The plot with w = 32 adapts to the first change almost instantly, while the w = 128 plot is slower to adapt. At the next change, the w = 32 plot is again more agile than the w = 128 plot. Strangely, neither plot adapts to the final change at 26 seconds. Examination of the trace revealed that the TCP code in ns was not increasing its window as it should have. Therefore, it was sending packets with a potential bandwidth of only 1.5Mb/s.

TABLE VI
FILTERING ACCURACY

Alg.  Filter  BW       Mean   σ      Med.   Max.
SB    MBF     10Mb/s   0.001  0.000  0.020  0.998
SB    PBF     10Mb/s   0.001  0.000  0.020  0.998
RB    MBF     10Mb/s   0.001  0.000  0.020  0.998
RB    PBF     10Mb/s   0.001  0.000  0.020  0.998
SB    MBF     500Kb/s  0.442  2.368  0.250  25.667
SB    PBF     500Kb/s  0.078  0.268  0.000  0.998
RB    MBF     500Kb/s  4.355  3.394  7.000  7.000
RB    PBF     500Kb/s  0.000  0.021  0.000  0.998

As we discussed in Section V-A, Packet Pair algorithms cannot report a measured bandwidth higher than the potential bandwidth. We conclude from these results that we must decrease the window size to detect changes in bandwidth quickly. This supports the conclusion in

Section IV-E that we must make a tradeoff between timeliness and accuracy in choosing a window size.

C. Potential Bandwidth Filtering

In this section, we investigate the effectiveness of our new filtering algorithm, PBF. As discussed in Section V-B, PBF is designed to overcome the problems that the standard Measured Bandwidth Filtering (MBF) algorithm has on traffic with mostly low potential bandwidth packets. It would also be desirable if PBF performed no worse than MBF on traffic where most of the packets have high potential bandwidth. In order to test PBF, we used our simulated Ethernet network for the mostly high potential bandwidth traffic, and the uplink of a simulated asymmetric cable modem network for the mostly low potential bandwidth traffic. In addition to varying the amount of overall congestion, we also set up a second TCP connection between the same two hosts as the first connection, but in the reverse direction. This ensures that there are at least a few high potential bandwidth packets in the outbound direction.

The results are summarized in Table VI. The first four lines show that PBF is equivalent to MBF on the Ethernet

network. The next four lines show that PBF is anywhere from 37% to 435% more accurate on average than MBF on the cable modem network. MBF also has a significantly higher median than PBF, so MBF's poor average accuracy cannot be blamed on a few outliers. Examination of the trace verifies the analysis of Section V-A: the outbound link is filled with acks, which overwhelm the few data packets. This causes MBF to incorrectly report the bandwidth of the acks as the true bandwidth. PBF is able to filter out those samples and discern the true bandwidth.

In the future, we are interested in simulating more networks and algorithms, calculating different metrics, and testing our ideas in the Internet. One type of network we did not simulate is a wireless network. ns has support for wireless networks, but this was not fully functional at the time we did our experiments. Wireless networks are interesting to examine because they tend to have high loss rates and high variance in latency, both situations that would challenge Packet Pair algorithms. In addition, we would like to simulate the pathchar algorithm so that we can compare its accuracy to the passive techniques.

We would like to apply the

methods described here to calculate available bandwidth. As mentioned in Section II-B, some applications would find that a more useful metric. We believe that the methods described here would apply with some minor modifications.

We are currently using these bottleneck bandwidth measurement algorithms to implement nettimer, which can take live measurements from the Internet.

We examined the characteristics of current bandwidth measurement techniques and found several problems. We propose statistically robust algorithms which overcome these problems by giving timely estimates, being agile in the face of bandwidth changes, giving more flexibility in deployment, and working with a variety of different traffic types. Our simulation results show that our implementation is more than 37% more accurate than previous techniques. We conclude that accurate, flexible, and scalable bandwidth measurement is not only possible, but desirable in order to maintain the growth and reliability of many Internet applications.

We would like to acknowledge the help of several people. Stuart Cheshire provided the code for utimer, which was the inspiration for nettimer. Guido Appenzeller suggested investigating robust methods of calculating density. Marcos de Alba pointed out an error in the pathchar analysis. Vern Paxson gave us an early copy of tcpanaly and provided valuable feedback on Potential Bandwidth Filtering. Craig Partridge provided advice on the motivation. Finally, we would like to thank the many anonymous reviewers for their feedback.

[1] Mary G. Baker, Xinhua Zhao, Stuart Cheshire, and Jonathan Stone. Supporting mobility in MosquitoNet. In Proceedings of the 1996 USENIX Technical Conference, January 1996.
[2] Jean-Chrysostome Bolot. End-to-end packet delay and loss beha

vior in the internet. In Proceedings of SIGCOMM, 1993.
[3] Kenneth L. Calvert, Matthew B. Doar, and Ellen Zegura. Modeling internet topology. IEEE Communications Magazine, 1997.
[4] Robert L. Carter and Mark E. Crovella. Dynamic server selection using bandwidth probing in wide-area networks. Technical Report BU-CS-96-007, Boston University, 1996.
[5] Robert L. Carter and Mark E. Crovella. Measuring bottleneck link speed in packet-switched networks. Technical Report BU-CS-96-006, Boston University, 1996.
[6] Stuart Cheshire and Mary Baker. Experiences with a wireless network in MosquitoNet. In Proceedings of the IEEE Hot Interconnects Symposium, 1995.
[7] William Feller. An Introduction to Probability Theory and its Applications, volume II. Wiley Eastern Limited, 1988.
[8] Van Jacobson. pathchar. ftp://ftp.ee.lbl.gov/pathchar/, 1997.
[9] Srinivasan Keshav. A control-theoretic approach to flow control. In Proceedings of SIGCOMM, 1991.
[10] Steven McCanne, Sally Floyd, Kevin Fall, Kannan Varadhan, et al. ns. http://www-mash.cs.berkeley.edu/ns/, 1997.
[11] Vern Paxson. End-to-end internet packet dynamics. In Proceedings of SIGCOMM, 1997.
[12] Vern Paxson. Measurements and Analysis of

End-to-End Internet Dynamics. PhD thesis, University of California, Berkeley, April 1997.
[13] Vern Paxson and Sally Floyd. Why we don't know how to simulate the internet. In Proceedings of the 1997 Winter Simulation Conference, 1997.
[14] David W. Scott. Multivariate Density Estimation: Theory, Practice, and Visualization. Addison Wesley, 1992.
[15] Srinivasan Seshan, Mark Stemm, and Randy Katz. SPAND: Shared passive network performance discovery. In Proceedings of the USENIX Symposium on Internet Technologies and Systems, 1997.
[16] Ronald A. Thisted. Elements of Statistical Computing: Numerical Computation. Chapman and Hall, 1988.