Presentation Transcript

Slide1

Need for Speed: Beyond 100GbE

Moderator: Scott Kipp, President of Ethernet Alliance; Principal Engineer, Brocade
Panelist #1: Alan Weckel, Vice President, Dell’Oro Group
Panelist #2: Dr. Jeffery J. Maki, Distinguished Engineer, Juniper
Panelist #3: Dr. Gordon Brebner, Distinguished Engineer, Xilinx

Slide2

© 2012 Ethernet Alliance

Agenda

- Introductions: Scott Kipp, Moderator
- Panelist #1: Alan Weckel, 10, 40 and 100GbE Deployments in the Data Center
- Panelist #2: Dr. Jeffery J. Maki, Stepping Stones to Terabit-Class Ethernet
- Panelist #3: Dr. Gordon Brebner, Technology Advances in 400GbE Components
- Q&A
- 2:40 – Live Broadcast from IEEE 802.3 Meeting in Orlando from John D’Ambrosia: Update on 400GbE Call For Interest

Slide3

Disclaimer

The views we are expressing in this presentation are our own personal views and should not be considered the views or positions of the Ethernet Alliance.

Slide4

Bandwidth Growth

Increased # of Users + Increased Access Rates and Methods + Increased Services = Bandwidth Explosion Everywhere

Key growth factors:
- More Internet users: 3B users in 2015
- More devices: 15B devices in 2015
- More rich media content: 2010 – 1 minute video; 2015 – 2 hour HDTV movie
- Broadband speed increasing: 2010 – 7 Mbps; 2015 – 28 Mbps

Source: nowell_01_0911.pdf citing Cisco Visual Networking Index (VNI) Global IP Traffic Forecast, 2010–2015,
http://www.ieee802.org/3/ad_hoc/bwa/public/sep11/nowell_01_0911.pdf

Slide5

Bandwidth Growth vs. Ethernet Speeds

IP traffic is growing ~30%/year. If 400GbE is released in 2016, Ethernet speeds will grow at about 26%/year.

[Chart: Ethernet speed (Gb/s) and Internet traffic, normalized to 100 in 2010. Internet traffic would grow ~10X by 2019 at 30%/year; Ethernet speeds to grow 4X by 2016 at 26%/year.]
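As a quick sanity check on these figures, here is a minimal back-of-the-envelope sketch (my own illustration, assuming plain compound annual growth from 2010 baselines):

```python
# Back-of-the-envelope check of the growth claims above, assuming simple
# compound annual growth from 2010 baselines (illustration only).
traffic_growth = 1.30 ** (2019 - 2010)   # ~30%/year IP traffic growth
speed_growth = 1.26 ** (2016 - 2010)     # ~26%/year Ethernet speed growth

print(f"Internet traffic by 2019 at 30%/yr: ~{traffic_growth:.1f}x the 2010 level")  # ~10.6x
print(f"Ethernet speeds by 2016 at 26%/yr:  ~{speed_growth:.1f}x the 2010 speed")    # ~4.0x
```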

Slide6

Ethernet Optical Modules

[Chart: data rate and line rate (b/s), from 1G through 10G and 40G to 100G, vs. year the standard was completed, 1995–2015. Key: Ethernet standard released; module form factor released. Module form factors shown: GBIC, SFP, XENPAK, XPAK, X2, 300 Pin MSA, XFP, SFP+, QSFP+, CXP, CFP, CFP2, CFP4, QSFP28.]

Slide7

Ethernet Speeds 2010-2025

[Chart: data rate and line rate (b/s), from 10G through 40G, 100G and 400G to 1T, vs. year the standard was completed, 2010–2025. Key: Ethernet speeds; Ethernet electrical interfaces; hollow symbols = predictions; stretched symbols = time tolerance. Speeds and electrical interfaces shown: 40GbE (4x10G), 100GbE (10x10G, 4x25G, 1x100G), 400GbE (16x25G, 8x50G, 4x100G), TbE (10x100G, nx100G), 1.6TbE (16x100G).]

If Ethernet doubles its line rate every 3 years (a 26% CAGR), then 400GbE would come out in 2016 and TbE would come out in 2020. Something will have to change.
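For reference, the arithmetic behind this projection, as a small illustrative sketch (assuming 100GbE in 2010 as the starting point):

```python
import math

# Illustrative sketch of the CAGR arithmetic: at 26%/year the line rate
# doubles roughly every 3 years, so starting from 100G in 2010 the curve
# crosses 400G around 2016 and ~1T around 2020.
CAGR = 0.26

doubling_time = math.log(2) / math.log(1 + CAGR)
print(f"Doubling time at 26% CAGR: {doubling_time:.1f} years")    # ~3.0 years

def year_reached(target_gbps, start_gbps=100, start_year=2010):
    """Year at which the line rate reaches target_gbps under constant CAGR."""
    return start_year + math.log(target_gbps / start_gbps) / math.log(1 + CAGR)

print(f"400G reached around {year_reached(400):.0f}")              # ~2016
print(f"1T reached around {year_reached(1000):.0f}")               # ~2020
```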

Slide8

Ethernet Success

- Ethernet has been extremely successful at lowering the price/bit of bandwidth
- If the cost of a new speed/technology is too high, it is not widely deployed
- Technology needs to be ripe for picking: 400GbE is ripe with 100GbE technology; TbE isn't ripe, and a revolutionary breakthrough would be needed to get it before 2020

This panel will look at how the high speeds of Ethernet are being deployed and the technology that is leading to the next generation of Ethernet.

Slide9

10, 40 and 100GbE Deployments in the Data Center
Alan Weckel
Vice President, Data Center Research
Dell’Oro Group

Slide10

Introduction

- Progress on server migration from 1 GbE to 10 GbE
- 10G Base-T update
- Data center networking market update
- 40 GbE and 100 GbE market forecasts

Slide11

Overview

- Dell’Oro Group is a market research firm that has been tracking the Ethernet switch and routing markets on a quarterly basis since 1996
- We also track the SAN market, optical market, and most telecom equipment markets
- We produce quarterly market share reports that include port shipments as well as market forecasts

Slide12

Data Center Bandwidth Shipping – Ethernet Switching

[Chart: petabytes per second shipped per year]

Slide13

Switch Attach Rate on Servers

[Chart: percent of server shipments with 1 GbE, 10 GbE, and 40 GbE switch attach]

Slide14

Data Center Port Shipments – 10G Base-T Port Shipments

[Chart: port shipments in thousands; 10G Base-T controller and adapter ports vs. 10G Base-T switch ports]

Slide15

Data Center Port Shipments – Ethernet Switching

[Chart: port shipments in millions]

Slide16

Data Center Port Shipments – Ethernet Switching

[Chart: port shipments in millions]

Slide17

Summary

- Ethernet switches will be responsible for the majority of 40 GbE and 100 GbE port shipments over the next five years
- Form factor and cost driving 40 GbE over 100 GbE
- 10 GbE server access transition is key to higher-speed adoption

Slide18

Stepping Stones to Terabit-Class Ethernet:
Electrical Interface Rates and Optics Technology Reuse
Jeffery J. Maki
Distinguished Engineer, Optical
Juniper Networks, Inc.

Slide19

100G

Slide20

CFP, CFP2 and CFP4 for SMF or MMF Applications

[Images of CFP, CFP2 and CFP4 modules (CFP(LC), CFP2(LC), CFP4(LC)) with LC duplex (depicted) or MPO optical connectors. Courtesy of TE Connectivity.]

CFP MSA Form Factors: http://www.cfp-msa.org/

Slide21

Module Electrical Lane Capability

- CFP: 12x10G electrical lanes (CPPI & CAUI for 10x10G)
- CFP2: 10x10G or 8x25G electrical lanes (CAUI-4 for 4x25G; CAUI for 10x10G)
- CFP4: 4x25G electrical lanes (CAUI-4 for 4x25G)

Slide22

CFP, CFP2, and CFP4 for 100G Ethernet SMF PMD

Current options:
- Up to 10 km: 100GBASE-LR4
- Up to 40 km: 100GBASE-ER4

[Diagram, transmit side only depicted: 4 λ on the LAN WDM grid (1295.56 nm, 1300.05 nm, 1304.58 nm, 1309.14 nm). CFP and CFP2 use a gear box ahead of the LAN WDM optics; CFP4 drives the LAN WDM lanes directly.]

Slide23

400G

Slide24

Projection of Form Factor Evolution to 400G

[Diagram: the 100G family (CFP, CFP2, CFP4) projected to a 400G family: CD-CFP with 16x25G electrical lanes, CD-CFP2 with 8x50G electrical lanes, and CD-CFP4 with 4x100G electrical lanes. A 400G module is shown as roughly 4 x CFP4 in size. The projections range from defensible to speculation.]

Roman numerals: XL = 40, C = 100, CD = 400

Slide25

Likely MSA Activity

CFP MSA, http://www.cfp-msa.org/:
- CD-CFP: current CFP needs revamping to support 16 x 25G
- CD-CFP2: current CFP2 is ready for 8 x 50G
- CD-CFP4: unclear

New CDFP MSA, http://www.cdfp-msa.org/:
- High-density form factor supporting 16 x 25G

From slide 26 of http://www.ieee802.org/3/cfi/0313_1/CFI_01_0313.pdf

Slide26

400G Optics Requirements

First-generation transceivers have to be implementable that meet, and eventually do better than, these requirements:
- Size (width): 82 mm (CFP width, ~4 x CFP4)
- Cost: 4 x CFP4
- Power: 24 W (4 x the 6 W power profile of CFP4)

Improved bandwidth density transceivers will need higher-rate electrical-lane technology: 50G, 100G.

Slide27

How 400G Ethernet Can Leverage 100G Ethernet

[Diagram: CFP4-LR4 modules over a duplex single-mode fiber infrastructure provide 100G Ethernet up to 10 km; four CFP4-LR4 optics over a parallel single-mode fiber infrastructure (only 8 fibers used) provide 400G Ethernet up to 10 km.]

Slide28

Possible SMF Ethernet Road Map: 100G, 400G, 1.6T

[Diagram:
- Early adopter 400G: CD-CFP(MPO) running 4 x 100GBASE-LR4, or "400GBASE-PSM4"; also usable as high-density 100GE in place of 4 x CFP4(LC)
- Mature 400G: CD-CFP2(LC) and CD-CFP4(LC) running 400GBASE-???
- Early adopter 1.6T: CD-CFP2(MPO) running 4 x 400GBASE-???, or "1600GBASE-PSM4"
Parallel Single Mode, 4 Lanes (PSM4): 4 Tx fibers and 4 Rx fibers, 1x12 MPO connector.]

Slide29

Early Adopter 400G using SMF Structured Cabling

- Technology reuse: 4 x 100GBASE-LR4
- Parallel SMF: "400GBASE-PSM4"

[Cabling diagram courtesy of Commscope.]

Slide30

Early Adopter 400G using MMF Structured Cabling

- Technology reuse: 4 x 100GBASE-SR4
- Parallel MMF: "400GBASE-SR16"

Parallel multi-mode:
- 100GBASE-SR4, 4 x 25G optical lanes: 4 Tx fibers and 4 Rx fibers using 1x12 MPO
- "400GBASE-SR16", 16 x 25G optical lanes: 16 Tx fibers and 16 Rx fibers using 2x16 MPO

[Cabling diagram courtesy of Commscope.]

Slide31

MMF Breakout Cables—Enabling 400G Adoption

[Image: a 2 x 16 MPO (2 x 16 MMF MT ferrule) breaking out into four 1 x 12 (8 used) MPO connectors. Courtesy of US Conec.]

Slide32

100G Can Build 400G at the Cost of 4 x 100G

- Technology reuse: 4 x 100GBASE-SR4; parallel MMF: "400GBASE-SR16"
- Technology reuse: 4 x 100GBASE-LR4; parallel SMF: "400GBASE-PSM4"

Slide33

Ethernet PMD Maturity & Possible Obsolescence

Early adopter PMD:
- Parallel fiber, SMF or MMF
- Leverage of mature PMD from previous speed of Ethernet
- Planned obsolescence: implementation (with MPO connector) persists as high-density support of previous speed of Ethernet (e.g., 4 x 100G)

Mature PMD:
- SMF: duplex SMF cabling (e.g., with LC duplex connector)
- MMF: lower fiber count MMF cabling

Slide34

SMF Density Road Map

[Chart: relative front-panel bandwidth density (1, 2, 4, 8, 16) vs. port bandwidth (100G, 400G, 1.6T). Form factors shown include CFP(LC), CFP2(LC), CFP4(LC), 4 x CFP4(LC) or CD-CFP(MPO), CD-CFP2(LC), CD-CFP4(LC), CD-CFP2(MPO) and 4 x CD-CFP2(MPO), each labeled mature or early adopter.]

Slide35

Summary

- Form-factor road map for bandwidth evolution
- Early adopter 400G Ethernet by reusing 100G module and parallel cabling, SMF or MMF
- Need for a new 2 x 16 MMF MT ferrule
- Possible common module for 400G Ethernet and high-density (4-port) 100G Ethernet
- Need for new electrical interface definitions supporting lane rates at 50G and 100G

Slide36

Technology Advances in 400GbE Components
Gordon Brebner
Distinguished Engineer
Xilinx, Inc.

Slide37

400GbE PCS/MAC

- Expect first: 16 PCS lanes, each at 25.78125 Gbps
  - Glueless interface to optics
  - Possible re-use of the 802.3ba PCS
  - Other options possible for PCS, maybe native FEC
- Later: 8 lanes, each at 51.56 Gbps
  - Or 4 lanes with 2 bits/symbol at 56 Gbaud (e.g. PAM4)
- Packet size 64 bytes to 9600 bytes
- Use 100GbE building blocks where possible
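As a small illustration of where the 25.78125 Gbps lane rate comes from (my own sketch, assuming the 66/64 overhead of 64b/66b encoding is kept as in 802.3ba):

```python
# Sketch: PCS lane rates for 400GbE, assuming 64b/66b encoding (66/64
# overhead) carried over from 802.3ba and no extra FEC overhead.
MAC_RATE_GBPS = 400
line_rate = MAC_RATE_GBPS * 66 / 64          # 412.5 Gbps on the wire

for lanes in (16, 8, 4):
    print(f"{lanes:2d} lanes -> {line_rate / lanes:.5f} Gbps per lane")
# 16 lanes -> 25.78125 Gbps (expected first)
#  8 lanes -> 51.56250 Gbps (later)
#  4 lanes -> 103.12500 Gbps, i.e. ~51.6 Gbaud with 2 bits/symbol (PAM4)
#             before any additional coding overhead
```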

Slide38

Silicon technology

Technology nodes (silicon feature size): 130nm, 65nm, 40nm, 28/32nm, 20/22nm, 14/16nm

Application-Specific Integrated Circuit (ASIC):
- Fixed chip
- Increasingly expensive: need high volumes
- Best suited to post-standardization Ethernet

Field Programmable Gate Array (FPGA):
- Programmable logic chip
- Suitable for prototyping and medium volumes
- Best choice for pre-standardization Ethernet

Slide39

400GbE line/system bridge

[Block diagram of an ASIC or FPGA chip: on the line side, CDFP or 4xCFP4 optics connect through 16 x 25G SERDES to the 400GbE PMA/PCS and 400GbE MAC; on the system side, a 500G Interlaken interface runs over 40 x 12.5G or 48 x 10G SERDES through bridge logic. Wide parallel data paths connect the blocks.]

Slide40

MAC rate = Width x Clock

400 Gbps and 1 Tbps Ethernet MAC options:

MAC rate | Silicon node | Technology | Data path width       | Clock frequency
100 Gbps | 45, 40nm     | ASIC       | 160 bits              | 644 MHz
100 Gbps | 45, 40nm     | FPGA       | 512 bits              | 195 MHz
400 Gbps | 28, 20nm     | ASIC       | 400 bits              | 1 GHz
400 Gbps | 28, 20nm     | FPGA       | 1024 bits / 1536 bits | 400 MHz / 267 MHz
1 Tbps   | 20, 14nm     | ASIC       | 1024 bits             | 1 GHz
1 Tbps   | 20, 14nm     | FPGA       | 2048 bits / 2560 bits | 488 MHz / 400 MHz
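The table follows directly from MAC rate = width x clock. A minimal sketch of that arithmetic (illustration only; the table's clock figures include some implementation headroom):

```python
# Sketch: clock frequency needed to carry a given MAC rate over a given
# data path width (MAC rate = width x clock).
def required_clock_mhz(mac_rate_gbps, width_bits):
    """Minimum clock (MHz) to move mac_rate_gbps through a width_bits bus."""
    return mac_rate_gbps * 1000 / width_bits

for rate, width in [(100, 160), (100, 512), (400, 400),
                    (400, 1024), (400, 1536),
                    (1000, 1024), (1000, 2048), (1000, 2560)]:
    print(f"{rate:4d} Gbps over {width:4d} bits -> {required_clock_mhz(rate, width):6.1f} MHz")
# e.g. 400 Gbps over a 1024-bit path needs ~390 MHz, in line with the
# ~400 MHz FPGA figure in the table.
```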

Slide41

Multiple Packets/Word

- Up to 512-bit, only one packet completed: just need to deal with EOP then SOP in a word
- Beyond 512-bit, multiple packets completed: need to add parallel packet processing and deal with varying EOP and SOP positions

Bus width | Max packets | Max EOPs
512       | 2           | 1
1024      | 3           | 2
1536      | 4           | 3
512 * n   | n + 1       | n
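A minimal sketch of the rule in this table (my own illustration, assuming the 64-byte / 512-bit minimum Ethernet frame size, so at most one EOP can fall in each 512-bit section):

```python
# Sketch of the packets-per-word rule: with a 64-byte (512-bit) minimum
# frame, a W-bit word can contain at most W/512 EOPs, plus one more packet
# that continues into the next word.
def packets_per_word(bus_width_bits):
    """Return (max packets touching a word, max EOPs in a word)."""
    n = bus_width_bits // 512
    return n + 1, n

for width in (512, 1024, 1536, 2048):
    max_packets, max_eops = packets_per_word(width)
    print(f"{width:4d}-bit word: up to {max_packets} packets, {max_eops} EOPs")
```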

Slide42

400GbE CRC Example

- All Ethernet packets carry a Cyclic Redundancy Code (CRC) for error detection
- Computed using the CRC-32 polynomial
- Critical function within the Ethernet MAC

Requirements:
- Computed at line rate
- Deal with multiple packets in a wide data path
- Economical with silicon resources

Slide43

400GbE CRC Prototype

- Xilinx Labs research project
- Modular: built out of 512-bit 100G units
- Computes multiple CRCs per data path word
- Targeting 28nm FPGA (Xilinx Virtex-7 FPGAs)
- N-bit data path partitioned into 512-bit sections; 512-bit unit CRC results combined to get final CRC results (see the software sketch below)
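A hedged software sketch of the sectioning idea only: the frame is cut into 512-bit (64-byte) sections and a CRC-32 is carried across them. The FPGA prototype computes the per-section CRCs in parallel and combines them; this sketch simply chains the sections with zlib.crc32, which uses the same CRC-32 polynomial as the Ethernet FCS (wire-level bit and byte ordering of the FCS is ignored here).

```python
import zlib

SECTION_BYTES = 512 // 8   # 64-byte sections, matching the 512-bit 100G building block

def crc32_by_sections(frame):
    """CRC-32 of a frame, processed one 512-bit section at a time."""
    crc = 0
    for offset in range(0, len(frame), SECTION_BYTES):
        crc = zlib.crc32(frame[offset:offset + SECTION_BYTES], crc)
    return crc

frame = bytes(range(256)) * 4                      # dummy 1024-byte payload
assert crc32_by_sections(frame) == zlib.crc32(frame)
print(hex(crc32_by_sections(frame)))
```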

Slide44

400GbE CRC Prototype

Results:
- 1024-bit width is feasible for 400GbE
- Other widths: less challenging clock frequencies; demonstrate scalability beyond 400GbE

Data bus word size        | 1024-bit | 1536-bit | 2048-bit
Max clock frequency (MHz) | 400      | 381      | 326
Maximum line rate (Gbps)  | 409      | 585      | 668
Latency (ns)              | 17.5     | 18.4     | 21.5
FPGA resources (slices)   | 2,888    | 4,410    | 5,719

Slide45

Conclusions

- Can anticipate a 400GbE PCS/MAC standard
- Ever-increasing rates mean ever-wider internal data path width in electronics, leading to multiple packets per data word
- Possible to prototype pre-standard PCS/MAC using today's FPGA technology
- Demonstrated modular Ethernet CRC block based on 100GbE units; silicon resource scales linearly with line rate