Presentation Transcript

Windows Server 2012 NIC Teaming and Multichannel Solutions

Rick Claus, Sr. Technical Evangelist

@RicksterCDN

http://RegularITGuy.com

WSV321

Agenda - Reliability is job one!

NIC Teaming

Overview

Configuration choices

Managing NIC Teaming

Demo

SMB Multichannel

Overview

Sample Configurations

Troubleshooting

Demo

What do NIC Teaming and SMB Multichannel have in common?

Reliability is job one

NIC Teaming provides protection against failures in the host

SMB Multichannel provides multi-path protection

More bandwidth is always a good thing

NIC Teaming and SMB Multichannel both provide bandwidth aggregation when possible

NIC Teaming and SMB Multichannel work together!

NIC Teaming

What is NIC Teaming?

Also known as...

NIC Bonding

Load Balancing and Failover (LBFO)

...other things

The combining of two or more network adapters so that the software above the team perceives them as a single adapter that incorporates failure protection and bandwidth aggregation.

. . . And?

NIC teaming solutions also provide per-VLAN interfaces for VLAN traffic segregation.

Why use Microsoft’s NIC Teaming?

Vendor agnostic – anyone's NICs can be added to the team

Fully integrated with Windows Server 2012

Lets you configure your teams to meet your needs

Server Manager-style UI that manages multiple servers at a time

Microsoft supported – no more calls to NIC vendors for teaming support or getting told to turn off teaming

Team management is easy!

NIC teaming dismantled, and a vocabulary lesson

[Diagram] Team members (the network adapters) are combined into a team; on top of the team sit team interfaces, also called team NICs or tNICs.

Team connection modes

Switch independent mode

Doesn't require any configuration of a switch

Protects against adjacent switch failures

Switch dependent modes

Generic or static teaming

IEEE 802.1ax teaming, also known as LACP or 802.3ad

Requires configuration of the adjacent switch

[Diagram] A switch dependent team and a switch independent team
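As a rough PowerShell sketch (team and adapter names such as Team1, NIC1, NIC2 are placeholders, not from the slides), the connection mode is chosen at team creation with the TeamingMode parameter:

# Switch independent team: no switch configuration required
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Switch dependent, generic/static teaming: switch ports must be configured as a static team
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static

# Switch dependent, IEEE 802.1ax (LACP/802.3ad): switch ports must be configured for LACP
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp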

Load distribution modes

Address Hash – comes in three flavors

4-tuple hash (default distribution mode): uses the RSS hash if available, otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, uses the 2-tuple hash instead.

2-tuple hash: hashes the IP addresses. If the traffic is not IP, uses the MAC address hash instead.

MAC address hash: hashes the MAC addresses.

Hyper-V port

Hashes the port number on the Hyper-V switch that the traffic is coming from. Normally this equates to per-VM traffic.
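A minimal sketch of how these choices map to the LoadBalancingAlgorithm parameter (the team name is a placeholder; the values shown are the Windows Server 2012 options for address hash and Hyper-V port):

# 4-tuple address hash (TCP/UDP ports plus IP addresses) at team creation
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -LoadBalancingAlgorithm TransportPorts

# 2-tuple (IP addresses) or MAC address hashing on an existing team
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses

# Hyper-V port distribution (roughly per-VM traffic)
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort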

Switch/Load Interactions (Summary)

Switch independent + Address Hash: sends on all active members; receives on one member (the primary member).

Switch independent + Hyper-V port: sends on all active members; receives on all active members; traffic from the same port always stays on the same NIC.

Switch dependent + Address Hash: sends on all active members; receives on all active members; inbound traffic may use a different NIC than outbound traffic for a given stream (inbound traffic is distributed by the switch).

Switch dependent + Hyper-V port: all outbound traffic from a port goes out on a single NIC; inbound traffic may be distributed differently, depending on how the switch distributes traffic.

What modes am I using?

Switch/Load Interactions (SI/AH – Switch Independent, Address Hash)


Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash).

Because each IP address can only be associated with a single MAC address for routing purposes, this mode receives inbound traffic on only one member (the primary member).

Best used when:

a) native mode teaming where switch diversity is a concern;

b) active/standby mode is required;

c) servers run workloads that are heavy outbound and light inbound (e.g., IIS).

Switch/Load Interactions (SI/HP – Switch Independent, Hyper-V Port)


Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth limited to not more than one team member's bandwidth.

Because each VM (Hyper-V port) is associated with a single NIC, this mode receives inbound traffic for the VM on the same NIC it sends on, so all NICs receive inbound traffic. This also allows maximum use of VMQs for better performance overall.

Best used for teaming under the Hyper-V switch when:

- the number of VMs well exceeds the number of team members

- restricting a VM to one NIC's bandwidth is acceptable

Switch/Load Interactions (SD/AH – Switch Dependent, Address Hash)


Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash).

Receives on all ports. Inbound traffic is distributed by the switch. There is no association between inbound and outbound traffic.

Best used for:

- native teaming when maximum performance matters and switch diversity is not required; or

- teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver

Switch/Load Interactions (SD/HP – Switch Dependent, Hyper-V Port)


Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth limited to not more than one team member's bandwidth.

Receives on all ports. Inbound traffic is distributed by the switch. There is no association between inbound and outbound traffic.

Best used when:

- Hyper-V teaming where the number of VMs on the switch well exceeds the number of team members, and

- policy calls for switch dependent (e.g., LACP) teams and an individual VM does not need to transmit faster than one team member's bandwidth

Team interfaces (tNICs)

Team interfaces can be in one of two modes:

Default mode: passes all traffic that doesn't match any other team interface's VLAN ID

VLAN mode: passes all traffic that matches the VLAN

Inbound traffic is always passed to at most one team interface

[Diagram] Example teams: one with a VLAN=42 interface and a Default interface (all but 42); one with VLAN=42 and VLAN=99 interfaces, where unmatched traffic is black-holed; one with a Default interface feeding the Hyper-V switch

Team interface – at team creation

When a team is created it has one team interface. Team interfaces can be renamed like any other network adapter (Rename-NetAdapter cmdlet).

Team interfaces show up in Get-NetAdapter output.

Only this first (primary) team interface can be put in Default mode.
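A short sketch of inspecting and renaming the primary team interface; the team name Team1 and the new name are placeholders:

# Team interfaces appear alongside physical NICs
Get-NetAdapter

# List the team's interfaces, then rename the primary one like any other adapter
Get-NetLbfoTeamNic -Team "Team1"
Rename-NetAdapter -Name "Team1" -NewName "Team1-Primary"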

Team Interfaces - additional

Team interfaces created after initial team creation must be VLAN mode team interfaces.

Team interfaces created after initial team creation can be deleted at any time (UI or PowerShell).

It is a violation of Hyper-V rules to have more than one team interface on a team that is bound to the Hyper-V switch.

[Diagram] A team with a single Default team interface bound to the Hyper-V switch
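A hedged sketch of adding a VLAN-mode team interface after team creation and removing it again; the team name and VLAN ID are examples:

# Add a team interface that carries only VLAN 42
Add-NetLbfoTeamNic -Team "Team1" -VlanID 42 -Name "Team1-VLAN42"

# Team interfaces added this way can be removed at any time
Remove-NetLbfoTeamNic -Team "Team1" -VlanID 42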

Teams of one

A team with only one member (one NIC) may be created for the purpose of disambiguating VLANs.

A team of one has no protection against failure (of course).

[Diagram] A single-member team carrying team interfaces for VLANs 42, 99, 13, and 3995

Team members

Any physical Ethernet adapter can be a team member and will work as long as the NIC meets the Windows Logo requirements

Teaming of InfiniBand, WiFi, WWAN, etc., adapters is not supported

Teams of teams are not supported

Team member roles

A team member may be active or standby.
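For example, a member can be parked in standby so it carries traffic only after an active member fails (the adapter name is a placeholder):

# Put NIC2 in standby; it takes over if an active member fails
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby

# Return it to active
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Active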

Teaming in a VM is supported

Limited to switch independent, Address Hash mode

Teams of two team members are supported

Intended/optimized to support teaming of SR-IOV VFs, but may be used with any interfaces in the VM

Requires configuration of the Hyper-V switch, or failovers may cause loss of connectivity
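A sketch of the two halves of guest teaming, assuming a VM named VM1 and default adapter names inside the guest; the AllowTeaming switch-port setting is the Hyper-V switch configuration referred to above:

# On the Hyper-V host: permit the VM's virtual NICs to be teamed in the guest
Get-VMNetworkAdapter -VMName "VM1" | Set-VMNetworkAdapter -AllowTeaming On

# Inside the VM: a two-member, switch independent / address hash team
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent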

Manageability

Intuitive, easy-to-use NIC Teaming UI

So intuitive and powerful that some Beta customers are saying they don't want to bother with learning the PowerShell cmdlets

UI operates completely through PowerShell – uses PowerShell cmdlets for all operations

Manages servers (including Server Core) remotely from your Windows 8 client PC

Powerful PowerShell cmdlets

Object: NetLbfoTeam (New, Get, Set, Rename, Remove)

Object: NetLbfoTeamNic (Add, Get, Set, Remove)

Object: NetLbfoTeamMember (Add, Get, Set, Remove)
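A brief sketch of the three object families in use; all names are placeholders:

# Teams
Get-NetLbfoTeam
Rename-NetLbfoTeam -Name "Team1" -NewName "ProdTeam"

# Team members
Add-NetLbfoTeamMember -Name "NIC3" -Team "ProdTeam"
Get-NetLbfoTeamMember -Team "ProdTeam"

# Team interfaces
Get-NetLbfoTeamNic -Team "ProdTeam"

# Remove the team when it is no longer needed
Remove-NetLbfoTeam -Name "ProdTeam"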

Feature interactions

RSS: programmed directly by TCP/UDP when bound to TCP/UDP

VMQ: programmed directly by the Hyper-V switch when bound to the Hyper-V switch

IPsecTO, LSO, Jumbo frames, all checksum offloads (transmit): yes – advertised if all NICs in the team support it

RSC, all checksum offloads (receive): yes – advertised if any NIC in the team supports it

DCB: yes – works independently of NIC Teaming

RDMA, TCP Chimney offload: no support through teaming

SR-IOV: teaming in the guest allows teaming of VFs

Network virtualization: yes

Limits on NIC Teaming

Maximum number of NICs in a team: 32

Maximum number of team interfaces: 32

Maximum teams in a server: 32

Not all maximums may be available at the same time due to other system constraints

demo

NIC Teaming

SMB Multichannel

SMB Multichannel

Multiple connections per SMB session

Full Throughput

Bandwidth aggregation with multiple NICs

Multiple CPU cores engaged when using Receive Side Scaling (RSS)

Automatic Failover

SMB Multichannel implements end-to-end failure detection

Leverages NIC teaming if present, but does not require it

Automatic Configuration

SMB detects and uses multiple network paths

Sample Configurations

[Diagrams] SMB client/server pairs shown for: a single RSS-capable 10GbE NIC; multiple 1GbE NICs; multiple RDMA-capable 10GbE or InfiniBand NICs; and multiple 10GbE NICs in a NIC team. Vertical lines are logical channels, not cables.

SMB Multichannel – Single 10GbE NIC

1 session, without Multichannel:

No failover

Can't use full 10Gbps

Only one TCP/IP connection

Only one CPU core engaged

1 session, with Multichannel:

No failover

Full 10Gbps available

Multiple TCP/IP connections

Receive Side Scaling (RSS) helps distribute load across CPU cores

[Diagrams] SMB client and server, each with a single RSS-capable 10GbE NIC behind a 10GbE switch; per-core CPU utilization shown for cores 1-4
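RSS on the NIC is what lets a single 10GbE interface spread Multichannel connections across CPU cores; a quick check, assuming the adapter is named Ethernet:

# Show RSS capability and current settings for the adapter
Get-NetAdapterRss -Name "Ethernet"

# Enable RSS if the NIC supports it but it is turned off
Enable-NetAdapterRss -Name "Ethernet"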

SMB Multichannel – Multiple NICs

1 session, without Multichannel:

No automatic failover

Can't use full bandwidth

Only one NIC engaged

Only one CPU core engaged

1 session, with Multichannel:

Automatic NIC failover

Combined NIC bandwidth available

Multiple NICs engaged

Multiple CPU cores engaged

[Diagrams] SMB client/server pairs, each host with two RSS-capable 10GbE NICs connected through two 10GbE switches

SMB Multichannel Performance

Preliminary results using four 10GbE NICs simultaneously

Linear bandwidth scaling:

1 NIC – 1150 MB/sec

2 NICs – 2330 MB/sec

3 NICs – 3320 MB/sec

4 NICs – 4300 MB/sec

Leverages NIC support for RSS (Receive Side Scaling)

Bandwidth for small IOs is bottlenecked on CPU

Data goes all the way to persistent storage.

White paper provides full details.

See http://go.microsoft.com/fwlink/p/?LinkId=227841

Preliminary results based on Windows Server "8" Developer Preview

SMB Multichannel + NIC Teaming

1 session, with NIC Teaming, no Multichannel:

Automatic NIC failover

Can't use full bandwidth

Only one NIC engaged

Only one CPU core engaged

1 session, with NIC Teaming and Multichannel:

Automatic NIC failover (faster with NIC Teaming)

Combined NIC bandwidth available

Multiple NICs engaged

Multiple CPU cores engaged

[Diagrams] SMB client/server pairs where each host has two 10GbE or two 1GbE NICs in a NIC team, connected through paired switches

SMB Direct and SMB Multichannel

1 session, without Multichannel:

No automatic failover

Can't use full bandwidth

Only one NIC engaged

RDMA capability not used

1 session, with Multichannel:

Automatic NIC failover

Combined NIC bandwidth available

Multiple NICs engaged

Multiple RDMA connections

[Diagrams] SMB client/server pairs where each host has two RDMA-capable NICs (R-NICs), either 10GbE or 54Gb InfiniBand, connected through paired switches

SMB Multichannel – Not applicable

Single NIC configurations where full bandwidth is already available without Multichannel

Configurations with different NIC types or speeds

[Diagrams] Client/server pairs with a single 1GbE NIC, a single wireless NIC, a single 10GbE NIC, a single 32Gb InfiniBand R-NIC, or a single 10GbE R-NIC on each side, plus mixed configurations such as a 1GbE NIC alongside a wireless NIC

SMB Multichannel Configuration Options

[Table] Each configuration is rated on Throughput, Fault Tolerance for SMB, Fault Tolerance for non-SMB, and Lower CPU utilization: Single NIC (no RSS); Multiple NICs (no RSS); Multiple NICs (no RSS) + NIC Teaming; Single NIC (with RSS); Multiple NICs (with RSS); Multiple NICs (with RSS) + NIC Teaming; Single NIC (with RDMA); Multiple NICs (with RDMA).

Multichannel is on by default for SMB.

NIC Teaming is helpful for faster failover.

NIC Teaming is helpful for non-SMB traffic (mixed workloads, management).

NIC Teaming is not compatible with RDMA.
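Since Multichannel is on by default, a sketch of verifying the client-side setting and, if ever required, turning it off and back on:

# Confirm SMB Multichannel is enabled on the client
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Disable and re-enable it (disabling is rarely recommended)
Set-SmbClientConfiguration -EnableMultiChannel $false
Set-SmbClientConfiguration -EnableMultiChannel $true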

Troubleshooting SMB Multichannel

PowerShell:

Get-NetAdapter

Get-SmbServerNetworkInterface

Get-SmbClientNetworkInterface

Get-SmbMultichannelConnection

Event Log:

Applications and Services Logs > Microsoft > Windows > SMBClient

Performance Counters:

SMB Client Shares
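A sketch of those cmdlets in use to confirm which interfaces Multichannel selected; the server name FileServer1 is a placeholder:

# Server side: interfaces SMB advertises, with speed and RSS/RDMA capability
Get-SmbServerNetworkInterface

# Client side: local interfaces SMB can use
Get-SmbClientNetworkInterface

# Multichannel connections established after SMB traffic has flowed
Get-SmbMultichannelConnection
Get-SmbMultichannelConnection -ServerName "FileServer1"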

demo

SMB Multichannel

Some Windows Storage Resources

Virtualizing Storage for Scale, Resiliency, and Efficiency

http://go.microsoft.com/fwlink/?LinkID=254536

How to Configure Clustered Storage Spaces in Windows Server 2012

http://go.microsoft.com/fwlink/?LinkID=254538

Storage Spaces FAQ

http://go.microsoft.com/fwlink/?LinkID=254539

© 2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.

The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.