
The Internet at the Speed of Light

Ankit Singla†, Balakrishnan Chandrasekaran‡, P. Brighten Godfrey†, Bruce Maggs‡
†University of Illinois at Urbana–Champaign, ‡Duke University, Akamai
†{singla2, pbg}@illinois.edu, ‡{balac, bmm}@cs.duke.edu

ABSTRACT

For many Internet services, reducing latency improves the user experience and increases revenue for the service provider. While in principle latencies could nearly match the speed of light, we find that infrastructural inefficiencies and protocol overheads cause today's Internet to be much slower than this bound: typically by more than one, and often, by more than two orders of magnitude. Bridging this large gap would not only add value to today's Internet applications, but could also open the door to exciting new applications. Thus, we propose a grand challenge for the networking research community: a speed-of-light Internet. To inform this research agenda, we investigate the causes of latency inflation in the Internet across the network stack. We also discuss a few broad avenues for latency improvement.

Categories and Subject Descriptors

C.2.1 [Computer-Communication Networks]: Network Architecture and Design; C.2.5 [Computer-Communication Networks]: Local and Wide-Area Networks – Internet

General Terms

Measurement; Design; Performance

1. INTRODUCTION

Reducing latency across the Internet is of immense value: measurements and analysis by Internet giants have shown that shaving a few hundred milliseconds from the time for a transaction can translate into millions of dollars. For Amazon, a 100 ms latency penalty implies a 1% sales loss [29]; for Google, an additional delay of 400 ms in search responses reduces search volume by 0.74%; and for Bing, 500 ms of latency decreases revenue per user by 1.2% [14, 22]. Undercutting a competitor's latency by as little as 250 ms is considered a competitive advantage in the industry [8]. Even more crucially, these numbers underscore that latency is a key determinant of user experience.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists
, requires prior specific permission and/or a fee. HotNets '14, October 27–28, 2014, Los Angeles, CA, USA. Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-3256-9/14/10 ... $15.00. http://dx.doi.org/10.1145/2670518.2673876

While latency reductions of a few hundred milliseconds are valuable, in this work, we take the position that the networking community should pursue a much more ambitious goal: cutting Internet latencies to close to the limiting physical constraint, the speed of light, roughly one to two orders of magnitude faster than today. What would such a drastic reduction in Internet latency mean, and why is it worth pursuing? Beyond the obvious gains in performance and value for today's applications, such a technological leap has truly transformative potential. A speed-of-light Internet may help realize the full potential of certain applications that have so far been limited to the laboratory or have niche availability, such as telemedicine and telepresence. For some applications, such as massive multi-player online games, the size of the user community reachable within a latency bound may play an important role in user interest and adoption, and as we shall see later, linear decreases in communication latency result in super-linear growth in community size. Low latencies on the order of a few tens of milliseconds also open up the possibility of instant response, where users are unable to perceive any lag between requesting a page and seeing it rendered in their browsers. Such an elimination of wait time would be an important threshold in user experience. A lightning-fast Internet can also be expected to spur the development of new and creative applications. After all, even the creators of the Internet had not envisioned the myriad ways in which it is used today.

Given the promise a speed-of-light Internet holds, why is today's Internet more than an order of magnitude slower? As we show later, the fetch time for just the HTML for the landing pages of popular Web sites from a set of generally well-connected clients is, in the median, 34 times the round-trip speed-of-light latency. In the 90th percentile it is 169× slower. Why are we so far from the speed of light? While our ISPs compete primarily on the basis of peak bandwidth offered, bandwidth is not the answer. Bandwidth improvements are also necessary, but bandwidth is no longer the bottleneck for a significant fraction of the population: for instance, the average US consumer clocks in at 5+ Mbps, beyond which the effect of increasing bandwidth on page load time is small [27]. Besides, projects like Google Fiber and other fiber-to-the-home efforts by ISPs are further improving bandwidth. On the other hand, it has been noted in a variety of contexts, from CPUs, to disks, to networks, that 'latency lags bandwidth', and that latency is a more difficult problem [32].

How then do we begin addressing the order-of-magnitude gap between today's Internet latencies and the speed of light? Is speed-of-light connectivity over the Internet an unachievable fantasy? No! In fact, the high-frequency trading industry has already demonstrated its plausibility. In the quest to cut latency between the New York and Chicago stock exchanges, several iterations of this connection have been built, aimed at successively improving latency by just a few milliseconds, at the expense of hundreds of millions of dollars [28]. In the mid-1980s, the round-trip latency was 14.5 ms. This was cut to 13.1 ms by 2010 by shortening the physical fiber route. In 2012, however, the speed of light in fiber was declared too slow: microwave communication cut round-trip latency to 9 ms, and later down to 8.5 ms [18, 11]. The c-latency, i.e., the round-trip travel time between the same two locations along the shortest path on the Earth's surface at the speed of light in vacuum, is only 0.6 ms less. A similar race is underway along multiple segments in Europe, including London–Frankfurt [5].

In this work, we propose a 'speed-of-light Internet' as a grand challenge for the networking community, and suggest a path to that vision. In §2, we discuss the potential impact of such an advance on how we use the Internet, and more broadly, on computing. In §3, we measure how latencies over today's Internet compare to c-latency. In §4, we break down the causes of Internet latency inflation across the network stack. We believe this to be the first attempt to directly tackle the question 'Why are we so far from the speed of light?'. Using 20+ million measurements of 28,000 Web URLs served from 120+ countries, we study the impact of both infrastructural bottlenecks and network protocols on latency. In §5, based on our measurements and analysis, we lay out two broad approaches to cutting the large gap between today's Internet latencies and its physical limits.

2. THE NEED FOR SPEED

A speed-of-light Internet would be an advance with tremendous impact. It would enhance user satisfaction with Web applications, as well as voice and video communication. The gaming industry, where latencies larger than 50 ms can hurt gameplay [31], would also benefit. But beyond the promise of these valuable improvements, a speed-of-light Internet could fundamentally transform the computing landscape.

New applications. One of computing's natural, yet unrealized goals is to create a convincing experience of joining two distant locations. Several applications (telemedicine, remote collaborative music performance, and telepresence) would benefit from such technology, but are hampered today by the lack of a low latency communication mechanism. A speed-of-light Internet could move such applications from their limited experimental scope to ubiquity. And perhaps we will be surprised by the creative new applications that evolve in that environment. [Footnote 1: "New capabilities emerge just by virtue of having smart people with access to state-of-the-art technology." (Bob Kahn)]

Illusion of instant response. A speed-of-light Internet can realize the possibility of instant response. The limits of human perception imply that we find it difficult to correctly order visual events separated by less than 30 ms [7]. Thus, if responses over the Internet were received within 30 ms of the requests, we would achieve the illusion of instant response. [Footnote 2: This is a convenient benchmark number, but the exact number will vary depending on the scenario. For a 30 ms response time, the Internet will actually need to be a little faster because of server-side request processing time, screen refresh delay, etc. And the 'instant response' threshold will differ for audio vs. visual applications.] A (perceived) zero wait-time for Internet services would greatly improve user experience and allow for richer interaction. Immense resources, both computational and human, would become "instantly" available over a speed-of-light Internet.

Super-linear community size. Many applications require that the connected users be reachable within a certain latency threshold, such as 30 ms round-trip for instant response, or perhaps 50 ms for a massive multi-player online game. The value of low latency is magnified by the fact that the size of the available user community is a super-linear function of network speed. The area on the Earth's surface reachable within a given latency grows nearly quadratically in latency. [Footnote 3: Nearly, because the Earth is a sphere, not a plane.] Using population density data reveals somewhat slower, but still super-linear, growth. [Footnote 4: Throughout, we use population estimates for 2010 [15].] We measured the number of people within a 30 ms RTT from 200 capital cities of the world at various communication speeds. Fig. 1(a) shows the median (across cities) of the population reached. If Internet latencies were 20× worse than c-latency (x-axis = 0.05c), we could reach 7.5 million people "instantly". A 10× latency improvement (x-axis = 0.5c) would increase that community size by 49×, to 366 million. Therefore, the value of latency improvement is magnified, perhaps pushing some applications to reach critical mass.

Cloud computing and thin clients. Another potential effect of a speedier Internet is further centralization of compute resources. Google and VMware are already jointly working towards the thin client model through virtualization [23]. Currently, their Desktop-as-a-Service offering is targeted at businesses, with the customer centralizing most compute and data in a cluster, and deploying cheaper hardware as workstations. A major difficulty with extending this model to personal computing today is the much larger latency involved in reaching home users. Likewise, in the mobile space, there is interest in offloading some compute to the cloud, thereby exploiting data and computational resources unavailable on user devices [19]. As prior work [25] has argued, however, achieving highly responsive performance from such applications would today require the presence of a large number of data center facilities. With a speedier Internet, the 'thin client' model becomes plausible for both desktop and mobile computing with far fewer installations.
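The super-linear scaling described above can be sketched with a small calculation. This is a simplified flat-plane model with illustrative numbers, not the paper's methodology (which uses actual population density data on a sphere); the constants and speed fractions below are chosen only to mirror the x-axis of Fig. 1(a):

```python
# Reachable radius and area within a 30 ms RTT at several fractions of c.
# Flat-plane sketch: area (a proxy for community size) grows quadratically
# with communication speed, which is why value grows super-linearly.
import math

C_KM_PER_MS = 299792.458 / 1000  # speed of light in vacuum: ~299.8 km per ms
RTT_BUDGET_MS = 30               # "instant response" threshold from Sec. 2

for fraction in (0.05, 0.5, 1.0):    # 20x worse than c, 10x better, exactly c
    one_way_ms = RTT_BUDGET_MS / 2   # the budget covers a round trip
    radius_km = one_way_ms * fraction * C_KM_PER_MS
    area_km2 = math.pi * radius_km ** 2
    print(f"{fraction:4.2f}c -> radius {radius_km:7.0f} km, "
          f"area {area_km2:.2e} km^2")
```

Doubling the speed doubles the reachable radius but quadruples the reachable area; the 49× population growth reported above is somewhat slower than this quadratic bound because population is not uniformly distributed.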
Figure 1: The impact of communication speed on computing and people. With increasing communication speed: (a) the population within a 30 ms round-trip time grows super-linearly; (b) the number of locations (e.g. data centers or CDN nodes) needed for global 30 ms reachability from at least one location falls super-linearly; and (c) the tradeoff between the global latency target and the number of locations required to meet it improves.

For instance, if the Internet operated at half the speed of light, almost all of the contiguous US could be served instantly from just one location. Fig. 1(b) shows the number of locations needed for 99% of the world's population to be able to instantly reach at least one location: as we decrease Internet latency, the number of facilities required falls drastically, down to only 6 locations with global speed-of-light connectivity. (These numbers were estimated using a heuristic placement algorithm and could possibly be improved upon.) This result is closely related to that in Fig. 1(a): with increasing communication speed (which, given a latency bound, determines a reachable radius), the population reachable from a center grows super-linearly, and the number of centers needed to cover the entire population falls super-linearly.

Better geolocation. As latency gets closer to the speed of light, latency-based geolocation gets better, and in the extreme case of exact c-latency, location can be precisely triangulated. While better geolocation provides benefits such as better targeting of services and matching with nearby servers, it also has other implications, such as for privacy.

Don't CDNs solve the latency problem? Content distribution networks cut latency by placing a large number of replicas of content across the globe, so that for most customers, some replica is nearby. However, this approach has its limitations. First, some resources simply cannot be replicated or moved, such as people. Second, CDNs today are an expensive option, available only to larger Internet companies. A speedier Internet would significantly cut costs for CDNs as well, and in a sense, democratize the Internet. CDNs make a tradeoff between costs (determined, in part, by the number of infrastructure locations) and latency targets. For any latency target a CDN desires to achieve globally, given the Internet's communication latency, a certain minimum number of locations is required. Speeding up the Internet improves this entire tradeoff curve. This improvement is shown in Fig. 1(c), where we estimate (using our random placement heuristic) the number of locations required to achieve different latency targets for different Internet communication speeds: c/32, c/4, and c. [Footnote 5: Per our measurements in §3, c/32 is close to the median speed of fetching just the HTML for the landing pages of popular websites today, and c/4 is close to the median ping speed.] As is clear from these results, while CDNs will still be necessary to hit global latency targets of a few tens of milliseconds, the amount of infrastructure they require to do so will fall drastically with a speedier Internet.

Figure 2: Fetch time of just the HTML of the landing pages of popular Web sites in terms of inflation over the speed of light. In the median, fetch time is 34× slower.

3. THE INTERNET IS TOO SLOW

We fetched just the HTML for the landing pages of 28,000 popular Web sites from 400+ PlanetLab nodes using cURL [1]. [Footnote 6: We pooled Alexa's [9] top 500 Web sites from each of 120+ countries and used the unique URLs. We followed redirects on each URL, and recorded the final URL for use in experiments. In our experiments, we ignored any URLs that still caused redirects. We excluded data for the few hundred websites using SSL. We did find, as expected, that SSL incurred several RTTs of additional latency.] For each connection, we geolocated the Web server using commercial geolocation services, and computed the time it would take for light to travel round-trip along the shortest path between the same end-points, i.e., the c-latency. [Footnote 7: We have ground-truth geolocation for PlanetLab nodes; while the PlanetLab API yields incorrect locations for some nodes, these are easy to identify and remove based on simple latency tests.] Henceforth, we refer to the ratio of the fetch time to c-latency as the Internet's latency inflation. Fig. 2 shows the CDF of this inflation over 6 million connections. The time to finish HTML retrieval is, in the median, 34× the c-latency, while the 90th percentile is 169×. Thus, the Internet is typically more than an order of magnitude slower than the speed of light.
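The c-latency and inflation ratio defined above can be computed directly from endpoint coordinates. A minimal sketch using the haversine great-circle formula; the coordinates and fetch time below are illustrative placeholders, not values from our dataset:

```python
import math

def c_latency_ms(lat1, lon1, lat2, lon2):
    """Round-trip time, in ms, between two points at the speed of light in
    vacuum along the shortest (great-circle) path on the Earth's surface."""
    R_EARTH_KM = 6371.0
    C_KM_PER_S = 299792.458
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    # Haversine formula for the central angle between the two points.
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance_km = 2 * R_EARTH_KM * math.asin(math.sqrt(a))
    return 2 * distance_km / C_KM_PER_S * 1000  # round trip, in ms

# Hypothetical example: client near Urbana, IL; server near Frankfurt.
c_lat = c_latency_ms(40.11, -88.21, 50.11, 8.68)
fetch_time_ms = 1600              # hypothetical measured HTML fetch time
inflation = fetch_time_ms / c_lat # the "latency inflation" ratio from Sec. 3
print(f"c-latency {c_lat:.1f} ms, inflation {inflation:.1f}x")
```

The same ratio, computed per connection over the measured fetch time, yields the CDF in Fig. 2.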
Figure 3: Various components of latency inflation. One point is marked on each curve for the sake of clarity.

We note that PlanetLab nodes are generally well-connected, and latency can be expected to be poorer from the network's true edge.

4. WHY IS THE INTERNET SO SLOW?

To answer this question, we attempt to break down the fetch time across layers, from inflation in the physical path followed by packets to the TCP transfer time. We use cURL to obtain the time for DNS resolution, TCP handshake, TCP data transfer, and total fetch time for each connection. For each connection, we also run a traceroute from the client PlanetLab node to the Web server. We then geolocate each router in the traceroute, and connect successive routers with the shortest paths on the Earth's surface as an approximation for the route the packets follow. We compute the round-trip latency at the speed of light in fiber along this approximate path, and refer to it as the 'router-path latency'. We normalize each latency component by the c-latency between the respective connection's end-points. We limit this analysis to roughly one million connections, for which we used cURL to fetch the first 32 KB (22 full-sized packets) of data from the Web server.

The results are shown in Fig. 3. It is unsurprising that DNS resolutions are faster than c-latency about 20% of the time; in these cases, the server happens to be farther away than the DNS resolver. (The DNS curve is clipped at the left to more clearly display the other results.) In the median, DNS resolutions are 5.4× inflated over c-latency, with a much longer tail. In fact, we found that when we consider the top and bottom 10 percentiles of total fetch time inflation, DNS plays a significant role: among the fastest 10% of pages, even the worst DNS inflation is less than 3×, while for the slowest 10% of pages, even the median DNS time is worse than 20× inflated.

Fig. 3 also reveals the significant inflation in TCP transfer time: 8.7× in the median. Most of this is simply TCP's slow start mechanism at work; with only 32 KB being fetched, bandwidth is not the bottleneck here.
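The slow-start effect can be illustrated with a back-of-the-envelope model. This is a simplified sketch, not a TCP implementation: it assumes no loss, no delayed ACKs, and an initial window of 4 segments (an assumption; older stacks used 2–4, and [21] argues for 10):

```python
# Estimate how many round trips TCP slow start needs to deliver a small
# object, with the congestion window doubling each RTT until all data is sent.
def slow_start_rtts(total_segments, init_cwnd=4):
    rtts, sent, cwnd = 0, 0, init_cwnd
    while sent < total_segments:
        sent += cwnd   # one full window of segments departs per round trip
        cwnd *= 2      # exponential window growth during slow start
        rtts += 1
    return rtts

# 32 KB is about 22 full-sized (1460-byte) segments, as in our measurements.
segments = 22
# One RTT for the handshake and request, plus the transfer rounds.
total = 1 + slow_start_rtts(segments)
print(f"{total} RTTs to fetch {segments} segments")
```

Even this tiny transfer takes several round trips, so the transfer time is a small multiple of the RTT regardless of available bandwidth, consistent with the transfer-time inflation in Fig. 3.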
The TCP handshake (counting only the SYN and SYN-ACK) is 3.2× worse than c-latency in the median, roughly the same as the round-trip time (minimum ping latency). [Footnote 8: cURL allows explicit specification of the number of bytes to fetch, but some servers do not honor such a request. Measurements from connections that did not fetch roughly 32 KB were discarded.]

Note that the medians of inflation in DNS, TCP handshake, and TCP transfer time do not add up to the median inflation in total time. This is because of the long tails of the inflations in each of these. Having analyzed the somewhat easier to examine TCP and DNS factors, we devote the rest of this section to a closer look at inflation in the lower layers: physical infrastructure, routing, and queuing and bufferbloat.

4.1 Physical infrastructure and routing

Fig. 3 shows that in the median, the router-path is only 2.3× inflated. (The long tail is, in part, explained by 'hairpinning', i.e., packets between nearby end-points traversing circuitous routes across the globe. For instance, in some cases, packets between end-points in Eastern China and Taiwan were seen in our traces traveling first to California.) Note that 1.5× inflation would occur even along the shortest path along the Earth's surface, because the speed of light in fiber is roughly 2/3rd the speed of light in air/vacuum. Excluding this inflation from the median leaves a further inflation of 1.53×. While this may appear small, as we discuss below, our estimate is optimistic, and overall, inflation in these lower layers plays a significant role.

We see some separation between the minimum ping time and the router-path latency. This gap may be explained by two factors: (a) traceroute often does not yield responses from all the routers on the path, in which case we essentially see artificially shorter paths, since our computation simply assumes that there is a direct connection between each pair of successive replying routers; and (b) even between successive routers, the physical path may be longer than the shortest arc along the Earth's surface. We investigate the latter aspect using data from two research networks: Internet2 [4] and GÉANT. [Footnote 9: Data on fiber mileages from GÉANT [2], the high-speed pan-European research and education network, was obtained through personal communication with Xavier Martins-Rivas, DANTE. DANTE is the project coordinator and operator of GÉANT.] We obtained point-to-point fiber lengths for these networks and ran an all-pairs shortest paths computation on the network maps to calculate fiber lengths between all pairs of endpoints. We also calculated the shortest distance along the Earth's surface between each pair, and obtained the road distances using the Google Maps API [3]. Fig. 4 shows the inflation in fiber lengths and road distances compared to the shortest distance. Road distances are close to shortest distances, while fiber lengths are significantly larger and have a long tail. Even when only point-to-point connections are considered, fiber lengths are usually 1.5–2× larger than road distances.

Figure 4: Compared to the shortest distance along the Earth's surface, there is significantly more inflation in fiber lengths than in road distances in both (a) Internet2 connections; and (b) GÉANT connections.

While it is tempting to dismiss the 3.2× inflation in the median ping time in light of the larger inflation factors in DNS (5.4×) and TCP transfer (8.7×), each of DNS, TCP handshake, and TCP transfer time suffers due to inflation in the physical and network layers. What if there was no inflation in the lower layers? For an approximate answer, we can normalize inflation in DNS, TCP handshake, and TCP transfer time to that in the minimum ping time. Normalized by the median inflation in ping time (3.2×), the medians are 1.7×, 1.0×, and 2.7× respectively. Thus, inflation at the lower layers itself plays a big role in Internet latency inflation.

4.2 Loss, queuing, and bufferbloat

Fig. 3 shows that the TCP handshake time (the time between cURL's sending the SYN and receiving the SYN-ACK) is nearly the same as the minimum ping latency, indicating, perhaps, a lack of significant queuing effects. Nevertheless, it is worth considering whether packet losses or large packet delays and delay variations are to blame for poor TCP performance. Oversized and congested router buffers on the propagation path may exacerbate such conditions, a situation referred to as bufferbloat. In addition to fetching the HTML for the landing page, for each connection, we also sent 30 pings from the client to the server's address. We found that variation in ping times is small: the 2nd-longest ping time is only 1.2% larger than the minimum ping time in the median. However, because pings (using ICMP) might use queues separate from Web traffic, we also used tcpdump [6] at the client to log packet arrival times from the server, and analyzed the inter-arrival gaps between packets. We limited this analysis to the same roughly one million connections as before. More than 95% of these connections experienced no packet loss (estimated as packets re-ordered by more than 3 ms).

Under normal TCP operation, at this data transfer size, most packets can be expected to arrive with sub-millisecond inter-arrival times, with an estimated 13% of packets arriving with a gap of one RTT (as the sender waits for ACKs between windows). Only 5% of all inter-arrival gaps did not fall into either of those two categories. Further, for more than 80% of all connections, the largest gap was close to one RTT. Based on these numbers, for most connections, we can rule out the possibility of a single large gap, as well as that of multiple smaller gaps additively causing a large delay. We can safely conclude that for most of these connections, bufferbloat cannot explain the large latency inflation observed.

We use the above results from PlanetLab measurements only to stress that even in scenarios where bufferbloat is clearly not the dominant cause of additional latency, significant other problems inflate Internet latencies by more than an order of magnitude. Further, for a peek at bufferbloat in end-user environments, we also examined RTTs in a sample of TCP connection handshakes between Akamai's servers and clients (end-users) over a 24-hour time period, passively logged by Akamai servers. (A large fraction of routes to popular prefixes are unlikely to change at this time-scale in the Internet [35]. The connections under consideration here are physically much shorter, making route changes even more unlikely.) We analyzed all server-client pairs that appeared more than once in our data: 10 million pairs, of which 90% had 2 to 5 observations. We computed the inflation over c-latency of the minimum (Min), average (Avg), and maximum (Max) of the set of RTTs observed between each pair; for calculating the inflations we had ground truth on the location of the servers, and the clients were geolocated using data from Akamai EdgeScape [13].

Figure 5: Latency inflation in RTTs between end users and Akamai servers, and the variation therein. The difference between the minimum and average RTTs could possibly be attributed to bufferbloat.

Fig. 5 compares the CDFs of inflation in the Min, Avg, and Max of the RTTs. In the median, the Avg RTT is 1.9× the Min RTT (i.e., in absolute terms, Avg is 30 ms larger than Min). Bufferbloat is certainly a suspect for this difference, although server response times may also play a role. Note, however, that in our PlanetLab measurements, where bufferbloat does not play a central role, we observed (in the median) a ping latency of 124 ms. If we added an additional 30 ms of "edge inflation", it would comprise less than 20% of the total inflation in the ping latency, which itself is a fraction of the Internet's latency inflation. Thus, to summarize: loss, queuing, and bufferbloat do not explain most of the large latency inflation in the Internet.

5. FAST-FORWARD TO THE FUTURE

In line with the community's understanding, our measurements affirm that TCP transfer and DNS resolution are important factors causing latency inflation. However, inflation at the lower layers is equally, if not more, important. Thus, below, we lay out two broad ideas for drastically cutting Internet latencies, targeting each of these problems.

A parallel low-latency infrastructure: Most flows on the Internet are small in size, with most of the bytes being carried in a small fraction of flows [41]. Thus, it is conceivable that we could improve latency for the large fraction of small-sized flows by building a separate low-latency, low-bandwidth infrastructure to support them. Such a network could connect major cities along the shortest paths on the Earth's surface (at least within the continents) using a c-speed medium, such as either microwave or potentially hollow fiber [20]. Such a vision may not be far-fetched on the time horizon of a decade or two. As Fig. 4 shows, the road network today is much closer to shortest paths than the fiber network. Road construction is two orders of magnitude costlier per mile than fiber [16, 33]. Further, the additional cost of laying fiber along new roads, or roads that are being repaved, is even smaller. As the road infrastructure is repaired and expanded over decades, it seems feasible to include fiber outlay in such projects. In fact, along these lines, legislation recently proposed in the United States Congress would make it mandatory to install fiber conduits as part of any future Federal highway projects [17].

Latency optimizations by ISPs: ISPs, by virtue of observing real-time traffic, are in perhaps the best position to make latency optimizations for clients. For instance, an ISP can keep track of the TCP window sizes achieved by flows on a per-prefix basis. It can then direct clients to use these window sizes, thereby reducing the order-of-magnitude slowdown due to TCP transfer time that we see in Fig. 3. Likewise, ISPs can maintain pools of TCP connections to popular web services and splice these onto clients that seek to connect to the services, eliminating the TCP handshake time. A similar optimization is already being used by CDNs: Akamai maintains persistent TCP connections between its own servers as well as from its servers to content providers, and clients only connect to a nearby Akamai server, which may then patch the connection to a distant location [12]. ISPs can also make predictive optimizations. For instance, an ISP may observe that any client that requests a certain Web page then requests name resolution for certain other domains, or the fetching of certain resources. The ISP can then proactively resolve such names or fetch such resources for the client. We also observed in §4 that the tail DNS resolution time plays a significant role. Recent work by Vulimiri et al. [38] illustrates a simple and effective method of substantially cutting this tail time: redundancy in queries. This optimization can be deployed either by ISPs, making redundant queries on behalf of clients, or by the clients themselves.

6. RELATED WORK

There is a large body of work on reducing Internet latency. However, this work has been limited in its scope, its scale, and most crucially, its ambition. Several efforts have focused on particular pieces; for example, [34, 42] focus on TCP handshakes; [21] on TCP's initial congestion window; [38] on DNS resolution; [30, 24] on routing inflation due to BGP policy. Other work has discussed results from small scale experiments; for example, [36] presents performance measurements for 9 popular Web sites; [26] presents DNS and TCP measurements for the most popular 100 Web sites. The WProf [39] project breaks
down Web page load time for 350 Web pages into computational aspects of page rendering, as well as DNS and TCP handshake times. Wang et al. [40] investigate latency on mobile browsers, but focus on the compute aspects rather than networking.

The central question we have not seen answered, or even posed before, is 'Why are we so far from the speed of light?'. Even the ramifications of a speed-of-light Internet have not been explored in any depth: how would such an advance change computing and its role in our lives? Answering these questions, and thereby helping to set the agenda for networking research in this direction, is our work's primary objective.

The 2013 Workshop on Reducing Internet Latency [10] focused on potential mitigation techniques, with bufferbloat and active queue management being among the centerpieces. One interesting outcome of the workshop was a qualitative chart of latency reduction techniques, and their potential impact and feasibility (Fig. 1 in [10]). In a similar vein, one objective of our work is to quantify the latency gaps, separating out factors which are fundamental (like the c-bound) from those we might hope to improve. The goal of achieving latencies imperceptible to humans was also articulated [37]. We share that vision, and in §2 discuss the possible impacts of that technological leap.

7. CONCLUSION

Speed-of-light Internet connectivity would be a technological leap with phenomenal consequences, including the potential for new applications, instant response, and radical changes in the interactions between people and computing. To shed light on what's keeping us from this vision, we have attempted to quantify the latency gaps introduced by the Internet's physical infrastructure and its network protocols, finding that infrastructural gaps are as significant, if not more so, than protocol overheads. We hope that these measurements will form the first steps in the networking community's methodical progress towards addressing this grand challenge.
8. REFERENCES

[1] cURL. http://curl.haxx.se/.
[2] GÉANT. http://www.geant.net/.
[3] Google Maps API. http://goo.gl/I4ypU.
[4] Internet2. http://www.internet2.edu/.
[5] Quincy Extreme Data service. http://goo.gl/wSRzjX.
[6] tcpdump. http://www.tcpdump.org/.
[7] Temporal Consciousness, Stanford Encyclopedia of Philosophy. http://goo.gl/UKQwy7.
[8] The New York Times quoting Microsoft's "Speed Specialist", Harry Shum. http://goo.gl/G5Ls0O.
[9] Top 500 Sites in Each Country or Territory, Alexa. http://goo.gl/R8HuN6.
[10] Workshop on Reducing Internet Latency, 2013. http://goo.gl/kQpBCt.
[11] J. Adler. Raging Bulls: How Wall Street Got Addicted to Light-Speed Trading. http://goo.gl/Y9kXeS.
[12] Akamai. Accelerating Dynamic Content with Akamai SureRoute. http://goo.gl/bUh1s7.
[13] Akamai. EdgeScape. http://goo.gl/qCHPh1.
[14] J. Brutlag. Speed Matters for Google Web Search. http://goo.gl/t7qGN8, 2009.
[15] Center for International Earth Science Information Network (CIESIN), Columbia University; United Nations Food and Agriculture Programme (FAO); and Centro Internacional de Agricultura Tropical (CIAT). Gridded Population of the World: Future Estimates (GPWFE). http://sedac.ciesin.columbia.edu/gpw, 2005. Accessed: 2014-01-12.
[16] Columbia Telecommunications Corporation. Brief Engineering Assessment: Cost Estimate for Building Fiber Optics to Key Anchor Institutions. http://goo.gl/ESqVPW.
[17] Congressional Bills, 112th Congress. Broadband Conduit Deployment Act of 2011. http://goo.gl/9kLQ4X.
[18] C. Cookson. Time is Money When it Comes to Microwaves. http://goo.gl/PspDwl.
[19] E. Cuervo. Enhancing Mobile Devices through Code Offload. PhD thesis, Duke University, 2012.
[20] DARPA. Novel Hollow-Core Optical Fiber to Enable High-Power Military Sensors. http://goo.gl/GPdb0g.
[21] N. Dukkipati, T. Refice, Y. Cheng, J. Chu, T. Herbert, A. Agarwal, A. Jain, and N. Sutin. An Argument for Increasing TCP's Initial Congestion Window. SIGCOMM CCR, 2010.
[22] Eric Schurman (Bing) and Jake Brutlag (Google). Performance Related Changes and their User Impact. http://goo.gl/hAUENq.
[23] Erik Frieberg, VMware. Google and VMware Double Down on Desktop as a Service. http://goo.gl/5quMU7.
[24] L. Gao and F. Wang. The Extent of AS Path Inflation by Routing Policies. GLOBECOM, 2002.
[25] K. Ha, P. Pillai, G. Lewis, S. Simanta, S. Clinch, N. Davies, and M. Satyanarayanan. The Impact of Mobile Multimedia Applications on Data Center Consolidation. IC2E, 2013.
[26] M. A. Habib and M. Abrams. Analysis of Sources of Latency in Downloading Web Pages. WEBNET, 2000.
[27] Ilya Grigorik (Google). Latency: The New Web Performance Bottleneck. http://goo.gl/djXp3.
[28] G. Laughlin, A. Aguirre, and J. Grundfest. Information Transmission Between Financial Markets in Chicago and New York. arXiv:1302.5966v1, 2013.
[29] J. Liddle. Amazon Found Every 100ms of Latency Cost Them 1% in Sales. http://goo.gl/BUJgV.
[30] W. Mühlbauer, S. Uhlig, A. Feldmann, O. Maennel, B. Quoitin, and B. Fu. Impact of Routing Parameters on Route Diversity and Path Inflation. Computer Networks, 2010.
[31] L. Pantel and L. C. Wolf. On the Impact of Delay on Real-Time Multiplayer Games. NOSSDAV, 2002.
[32] D. A. Patterson. Latency Lags Bandwidth. Communications of the ACM, 2004.
[33] Planning & Markets, University of Southern California. Highway Construction Costs under Government Provision. http://goo.gl/pJHFSB.
[34] S. Radhakrishnan, Y. Cheng, J. Chu, A. Jain, and B. Raghavan. TCP Fast Open. CoNEXT, 2011.
[35] J. Rexford, J. Wang, Z. Xiao, and Y. Zhang. BGP Routing Stability of Popular Destinations. ACM SIGCOMM Workshop on Internet Measurement, 2002.
[36] S. Sundaresan, N. Magharei, N. Feamster, and R. Teixeira. Measuring and Mitigating Web Performance Bottlenecks in Broadband Access Networks. IMC, 2013.
[37] D. Täht. On Reducing Latencies Below the Perceptible. Workshop on Reducing Internet Latency, 2013.
[38] A. Vulimiri, P. B. Godfrey, R. Mittal, J. Sherry, S. Ratnasamy, and S. Shenker. Low Latency via Redundancy. CoNEXT, 2013.
[39] X. S. Wang, A. Balasubramanian, A. Krishnamurthy, and D. Wetherall. Demystify Page Load Performance with WProf. NSDI, 2013.
[40] Z. Wang. Speeding Up Mobile Browsers without Infrastructure Support. Master's thesis, Duke University, 2012.
[41] Y. Zhang, L. Breslau, V. Paxson, and S. Shenker. On the Characteristics and Origins of Internet Flow Rates. ACM CCR, 2002.
[42] W. Zhou, Q. Li, M. Caesar, and P. B. Godfrey. ASAP: A Low-Latency Transport Layer. CoNEXT, 2011.