The Power of the Windows Server Software Defined Datacenter


Presentation Transcript


The Power of the Windows Server Software Defined Datacenter in Action

Philip Moss, Managing Partner IT - NTTX

BRK2469

Today's workloads

Domain Controllers
DNS (internal and public)
Exchange
SharePoint
Lync
SQL
WDS
File Servers
App-V
UE-V
RDSH
VDI
DPM
DHCP
MDM
Bespoke client line-of-business applications

Engineering goals

Support for multiple diverse workloads
Full end-to-end high availability
100% virtualisation
100% automation
Sub-system scale-out: storage, networking, compute
Cost-to-serve reduction
Removal of middleware
Hardware platform agnostic
Use of commodity hardware
Just-in-time hardware provisioning

Architecture – software defined datacentre

Storage: SOFS and Storage Spaces
Networking: SMB 3.0 and software defined networking
Compute: Hyper-V clustering, HNV
Core Platform: AD, DNS, DHCP, WSUS
Services: RDS, VDI, DPM
Productivity Applications: Exchange, SharePoint, Lync

Storage

Why software defined storage

Deliver a high performance and scalable delivery platform for virtual machine virtual hard disks using commodity hardware

Mitigate the use of dedicated hardware solutions

SANs
Direct-attached hardware RAID

Use a common, industry-standard data transport
Implement storage optimisation and management within software
Drive down deployment and operational cost

Data Delivery – Scale Out File Server

Scale-Out File Server
Storage Spaces – Windows Server as the storage controller
SMB 3 as the data transport; replaces iSCSI and Fibre Channel
Cheap generic JBODs
Multi-point highly available
Continuous availability
Full scale-out
Removes the requirement for a SAN

Storage Spaces 2012 R2 – tiering

Speeds up data reads and writes

Introduced in 2012 R2

SSD layer used for high-IO data
Data moved to SSD via "heat" logic
1 MB data chunks – not all of a large file needs to fit on SSD
Gained a write-back cache
Pinning allows files to be locked onto the SSD layer
Interoperation with the CSV cache
Heat does not work with the CSV cache
Does not work with redirected IO
Planning considerations
A tiered Space without CSV could be slower than a non-tiered Space using the CSV cache
You can still pin files to SSD
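A minimal PowerShell sketch of the tiering and pinning behaviour described above; the pool, tier, volume and file names here are illustrative assumptions, not values from the session.

```powershell
# Create SSD and HDD tiers in an existing pool, then a tiered mirror space (names assumed)
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace" `
    -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB

# Pin a hot VHDX onto the SSD tier and run the tier optimisation job
Set-FileStorageTier -FilePath "V:\VMs\Gold.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
Optimize-Volume -DriveLetter V -TierOptimize
```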

Spaces 2012 R2

Write-back cache on SSD
Dramatically increases write performance

Use 1GB

It is possible to set it higher; do not do this

Dynamic rebuild using spare capacity
No longer a requirement for a dedicated hot spare
Simply leave unallocated headroom in the disk pool
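A hedged sketch of the two recommendations above (a 1 GB write-back cache, and pool headroom instead of a hot spare); the pool name and sizes are assumptions.

```powershell
# Mirror space with the recommended 1 GB write-back cache (pool name assumed)
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VMStore01" `
    -ResiliencySettingName Mirror -Size 4TB -ProvisioningType Fixed -WriteCacheSize 1GB

# No dedicated hot spare: check that unallocated headroom remains for dynamic rebuild
Get-StoragePool -FriendlyName "Pool01" |
    Select-Object FriendlyName, Size, AllocatedSize
```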

Storage Spaces – design considerations

Data integrity

2-way mirror provides only limited disk failure protection

A suitable solution if using application-level HA
3-way mirror gives a good level of disk failure tolerance
Very costly in disk usage (66% raw capacity loss)
Parity Space now supported for clusters
Performance is not good
Enclosure awareness
Provides protection against entire JBOD failure
Setup considerations

3 JBODs for 2-way mirror, single enclosure failure
3 JBODs for 3-way mirror, single enclosure failure
5 JBODs for 3-way mirror, dual enclosure failure

Storage Spaces – 2012 R2 design considerations

Larger column counts are important
Column count defines how many disks are written across for any given write operation
Read operations use all copies of the data, giving a significant performance increase
Column count 4, 2-way mirror: read = 8-disk performance
Column count 4, 3-way mirror: read = 12-disk performance
Potential latency issues when using column counts of over 4
Column count is shared between SSD and HDD; SSDs can become the limiting factor
Larger pools are more efficient
Maximum pool size is 240 disks
Increases disk failure planning complexity
Do not exceed 80 disks per pool
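A short sketch of setting the column count explicitly when the space is created; pool name and sizes are assumptions.

```powershell
# 2-way mirror with a column count of 4: writes stripe across 8 disks,
# and reads can be serviced from both data copies (pool and sizes assumed)
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 4 `
    -Size 10TB -ProvisioningType Fixed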

De-Duplication

Supported for VDI and DPM workloads
Tiering is key for a de-dup deployment
De-duplicated data (the chunk store) IO will be massive
The chunk store cannot be pinned; use the heat logic built into tiering
CPU and RAM considerations
Can now run on hot (open) VHDx files, which consumes resources
Runs per volume, therefore planning is required so that CPU and RAM are not exhausted
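A minimal sketch of enabling de-duplication on a VDI volume as described above; the drive letter is an assumption.

```powershell
# Enable dedup on a VDI volume; the HyperV usage type supports open (hot) VHDX files
Enable-DedupVolume -Volume "E:" -UsageType HyperV
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0    # optimise files as soon as written
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"                           # check per-volume space savings
```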

Design Considerations - SoFS nodes

SMB client connection redirection

Reduces load / requirements for CSV network

Applies to 2012 R2 / Windows 8.1 and later
Incoming connection is "moved" to the node that owns the storage
Careful planning is required if SoFS is to be used with a DFS namespace
Increased RAM and CPU overhead
Requirement driven by heat and de-dup overheads
2012: a single physical processor and 12 GB of RAM was fine
2012 R2: dual CPUs and 128 GB plus of RAM

Networking
Plan for SMB Multichannel
If using RDMA there are no teaming options
As SoFS is a clustered solution, a separate IP is required for each NIC interface
Planning considerations on Hyper-V hosts
LACP is an option, however there are potential challenges
Distribution hash settings on switches
10 TB is the maximum recommended volume size.
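A hedged check of the SMB Multichannel behaviour described above, run from a Hyper-V host; server and interface names are assumptions.

```powershell
# Verify SMB Multichannel and RDMA-capable interfaces from a Hyper-V host
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Optionally constrain SMB traffic to the intended storage NICs (names assumed)
New-SmbMultichannelConstraint -ServerName "SOFS01" -InterfaceAlias "SMB1", "SMB2"
```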

Network

Why software defined networking

Simplification of network physical topology

Utilisation of commodity switching and cabling
Reduction in NIC port and switch / core port requirements
Removal of dedicated hardware
Network "appliance" activities moved to virtual machines or software roles
Network performance optimisation and management performed within software
Network isolation and segmentation performed in software

Software defined networking 101

Decoupled data delivery – VHDx via a standard protocol (SMB 3.0)
Physical load balancing and failover
Teaming (switch agnostic)
Load aggregation and balancing
SMB Multichannel
Commodity L2 switching
Cost-effective networking (Ethernet): RJ45, SFP+, QSFP+
Quality of Service – multiple levels
Hyper-V host workload overhead reduction – RDMA
Easily scale

Switch Agnostic NIC Teaming

Integrated solution for network card resiliency and load balancing
Vendor agnostic and shipped inbox
Enables teams of up to 32 NICs
Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of a NIC outage
Includes multiple modes: switch dependent and independent

Multiple traffic distribution algorithms: Hyper-V Switch Port, Hashing and Dynamic Load Balancing

[Diagram: physical network adapters grouped by NIC Teaming into team network adapters exposed to the operating system]
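A minimal sketch of the inbox teaming described above; adapter and team names are assumptions.

```powershell
# Switch-independent team of two adapters using the Dynamic algorithm (adapter names assumed)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam   # confirm team state and members
```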

RDMA Considerations

Why RDMA
Greatly reduces CPU overhead on Hyper-V and Scale-Out File Server hosts
Allows more resources for running VMs
Improved data transport speed
Two main Ethernet-based options: RoCE and iWARP
Primary vendor options: Chelsio, Mellanox
Comparison
iWARP: routable
RoCE: requires DCB (Data Centre Bridging); confirm support is available from your switching platform

Which is better?

Pros and cons to both

Operational throughput

Setup complexity

Deployment scenario requirements; is routing between subnets required, etc.

InfiniBand is an option

If you have an investment in IB it is an excellent route to take
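A hedged sketch of the checks and DCB configuration implied above for a RoCE deployment; the priority value and adapter names are assumptions, and the DCB steps apply to RoCE only.

```powershell
# Confirm the adapters expose RDMA
Get-NetAdapterRdma

# RoCE only: DCB/PFC must be configured end to end (priority and adapter names assumed)
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "RDMA1", "RDMA2"
# iWARP is routable and does not need the DCB steps above
```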

Now is the time for RDMA:

Hardware

is available

Software is mature

Vendors are on board

To be or not to be (converged) – that is the question

Converged networking = big wins

Reduces complexity and cost

Increases flexibility
Fully converged
Single network, no dedicated service networks
Use Windows networking capability to define the system: NIC Teaming, Hyper-V vSwitch QoS, SMB QoS, universal vSwitch binding
Parent loopback for SMB data to the host
Gain complete control through QoS

Excellent resource utilization, managing networking resources between workloads

SMB 3.0

VM traffic

Live Migration

Semi-converged

Dedicated NICs for SMB 3.0

Dedicated (teamed) NICs for VM traffic

Critical for RDMA deployments in 2012 R2

RDMA does not work via the vSwitch

No teaming support
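A hedged sketch of the fully converged option described above: one vSwitch on the team, weight-based QoS, and parent (loopback) vNICs for SMB and live migration. Switch, vNIC names and weights are assumptions.

```powershell
# Fully converged: single vSwitch on the team, weight-based QoS, host vNICs for SMB traffic
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "SMB" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "SMB" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMSwitch "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 30
```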

Digging deeper – fully converged

[Diagram: four pNICs bound into a switch-agnostic team in the parent OS; a single vSwitch with QoS carries all of the VM network adapters and host vNICs]

Digging deeper – semi-converged

[Diagram: in the parent OS, two pNICs carry SMB/RDMA traffic directly, while the other two pNICs form a switch-agnostic team under a vSwitch with QoS for the VM network adapters]

Network speed considerations

Gen 1: 1 Gbps using multiple connections
Very cheap NICs and switch ports
Attractive, as teaming / SMB Multichannel in Windows made this viable
Cabling nightmares
Not viable for next-generation solutions

Gen 2: 10 Gbps using multiple connections
Cost viable due to NIC and port cost reductions
Significant throughput achievable with 4 connections in each server
Cabling challenges remain
Deployment over very cost-effective RJ45

Gen 3: 40 Gbps
NIC and port costs are still high, but the available speed makes the trade-off acceptable
Requires QSFP+
Expensive cables and transceivers
Performance from only 2 ports is very high
Cabling issues mitigated
Avoids the requirement for VM teaming and mitigates many vRSS challenges
vRSS places significant overhead on the host
Makes very high-performance VMs simpler to deploy with increased flexibility

Compute

Why software defined compute

“Virtualize everything” provides enormous system benefits in terms of flexibility and scale

System portability

High availability
DR
Migration and upgrades
Manage quality of service and system "stress point" situations within software
Easily swap between scale-up and scale-out without changes in hardware
Achieve segmentation and resource isolation without investment in dedicated hardware

Hyper-V 2012 R2 – the basics

64-node clusters
8,000 VM limit prevents clusters approaching this node count fully loaded
Modern hardware allows for a huge VM count per node
SMB 3.0 support
Dynamic RAM
vGPU support
Cross-version live migration – key for 2012 to 2012 R2 migrations
Live migration compression
SMB prioritisation

De-Coupled data delivery

SMB 3.0 support introduced in Server 2012
Access VHDx over file storage, not block storage: \\servername\sharename
Replaces iSCSI or FC solutions
Simplifies Hyper-V solution design
Scale-out / scale-up requires less plumbing
Supported by multiple vendors: MS Scale-Out File Server, SAN providers

Hyper-V 2012 R2

Generation 2 VMs
UEFI based
Secure Boot support
WDS support without using the legacy NIC
No support for IDE; VHDx on SCSI only

Dynamic VHDx resize
Enables dynamic increase or decrease in VHDx size without taking the VM offline
Key feature for IaaS clients

Dynamic quorum selection
Introduced in 2012 R2
Very useful for clusters that grow over time

Networking – Hyper-V

vRSS support now available on vNICs
Addresses the limitation of a vNIC being bound to 1 CPU core and therefore maxing out
Allows for very high-performance VMs
vRSS puts significant load on the host CPU
No vRSS to the parent; not viable to drive high network bandwidth into the parent

New teaming algorithm: Dynamic
Combines Hyper-V port with address hash
Recommended setting

VM-based NIC teaming
Driving vNICs at above the wire speed of the physical host NIC is very difficult
Avoid the requirement for teaming through the use of higher-speed physical NICs

SR-IOV – choices and trade-offs

Key for low-latency / high-performance VM workloads

Limits VM deployment options as it requires a host with dedicated spare NICs

Dedicated NIC requirements increase if the VM requires HA NIC capability

For a service provider, this level of rigidity creates considerable challenges

Quality of Service – Hyper-V

Storage QoS

Define storage IOPS limits on a per-VM basis

The vSwitch is your friend
Define QoS behaviours on a per-VM basis
If using parent loopback, define QoS to control SMB traffic
QoS applies only to outbound connections
QoS helps you deal with "noisy neighbour" syndrome
No solution for CPU or RAM challenges today
SMB contains its own channel prioritisation logic
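A minimal sketch of the per-VM limits described above; the VM name, controller location and values are assumptions.

```powershell
# Cap a noisy neighbour's disk IO and give its network flow a modest QoS weight
Set-VMHardDiskDrive -VMName "Tenant01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 1000
Set-VMNetworkAdapter -VMName "Tenant01" -MinimumBandwidthWeight 10
```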

Services

Virtual network isolation

Software-based network isolation
Tenant "bring your own subnet"
Introduced in 2012
Removes the requirement to do VLAN tagging
Mitigates the 4,096 VLAN ceiling
Multi-tenant site-to-site gateway: 2012 R2

[Diagram: green and purple virtual machines on a physical server, each attached to its own virtualised network over the shared physical network]

Network virtualization gateway

Bridge between VM networks and physical networks

Multi-tenant VPN gateway built in to Windows Server 2012 R2

Integral multi-tenant edge gateway for seamless connectivity

Guest clustering for high availability

BGP for dynamic route updates

Multi-tenant-aware NAT for Internet access

IPSEC VPN – 400Mbps

GRE – 2.4Gbps

[Diagram: resilient HNV gateways at the service provider bridging the Contoso and Fabrikam tenant networks on Hyper-V hosts to the Internet]

Remote Desktop Services

Windows Server 2012

Full VDI support

Increased RDSH performance

High-availability broker
Full automation support
Significant improvements in audio and video capabilities
Hardware graphics acceleration

Remote desktop user experience

RDP 8.1
UDP support
vGPU – DirectX 11.1
Audio / video
Touch remoting
USB bus redirection
Touch and audio / video performance improvements
Improved "region" detection and codecs
Connection / reconnection performance improvements
Significant improvements in user experience when using RemoteApp
Dynamic screen / resolution resize – plug into an external monitor or change resolution and the connection automatically resizes

1st party clients available for multiple platforms

Windows

Windows Store

Mac

iOS

Android

Demo – The stack in action

Philip Moss

End-to-end stack

A user's personal virtual desktop

Running on clustered Hyper-V

VHDx over SMB 3.0

Using a Storage Spaces SoFS
Advanced graphics driven by vGPU
Services: Exchange, SharePoint, Lync
Delivered from VMs
Running on clustered Hyper-V
VHDX over SMB 3.0
100% converged networking, with RDMA
Securely accessed via Remote Desktop Services over the Internet from the UK
Animation, video, 3D driven by RDP 8.1

Storage Spaces

Scale out file server

SMB 3.0

Hyper-V Cluster

HA VM File Server

VM – Windows Client

Virtual GPU

Exchange

Lync

SharePoint

Remote Desktop Services

Summing up – Part 1

Summary

How to use the Microsoft stack to build a software defined datacentre

Deploy a fault-tolerant, highly available platform using commodity hardware
Storage – Scale-Out File Server
Network – SMB 3.0 and software defined optimization
Compute – Hyper-V clustering and Hyper-V features to optimize virtual machine performance

Drive down operational cost
Standardize on a single set of management and automation technologies: PowerShell

Deliver multi-tenant / isolation solutions: Hyper-V network virtualization

Take advantage of the new generation of Remote Desktop Services

Provide immersive and flexible virtual desktop solutions

(Brief) questions

Building on the core

Architecture – software defined datacentre: adding high availability and business continuity to the core stack

HA and DR – keeping the lights on

Your high-availability arsenal

Compute

VM based clusters using shared VHDx

Introduced in 2012 R2

Enables a 100% VHDx-based VM storage solution
Removes reliance on synthetic iSCSI or FC for shared storage in the VM

Primary workloads
HA file servers
Legacy SQL servers
Bespoke line-of-business applications requiring shared disk

Considerations
No support for Hyper-V Replica
Stretch clusters cannot be created

[Diagram: a VM cluster (VM A and VM B) running on a Hyper-V cluster, with per-VM VHDx files and a shared VHDx for cluster shared storage held on a continuously available Scale-Out File Server]
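A hedged sketch of attaching a shared VHDX to both guest-cluster nodes as described above; VM names and the SMB path are assumptions.

```powershell
# Attach the same shared VHDX to both guest-cluster nodes (names and path assumed)
Add-VMHardDiskDrive -VMName "FS-Node1" -ControllerType SCSI `
    -Path "\\SOFS01\VMs\ClusterShared.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "FS-Node2" -ControllerType SCSI `
    -Path "\\SOFS01\VMs\ClusterShared.vhdx" -SupportPersistentReservations
```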

Preventing all eggs in one (host) basket – VM affinity

Affinity controls and defines which VMs may co-exist on a single host

Prevents two related VMs, or VMs that must not fail at the same time, from being on the same host

Hyper-V cluster – without affinity setup

[Diagram: without affinity rules, both members of a guest VM cluster can land on the same Hyper-V host (hosts A, B and C shown)]

Hyper-V cluster – with affinity setup

[Diagram: with affinity rules applied, the two members of each guest VM cluster are placed on different Hyper-V hosts]
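A minimal sketch of the anti-affinity setup shown above, using failover clustering's AntiAffinityClassNames property; the cluster group names and class name are assumptions.

```powershell
# Tag both guest-cluster VMs with the same anti-affinity class so the Hyper-V cluster
# keeps them on different hosts where possible (group and class names assumed)
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestCluster1") | Out-Null
(Get-ClusterGroup -Name "FS-Node1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "FS-Node2").AntiAffinityClassNames = $class
```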

Cluster-Aware Updating – your HA secret weapon

Greatly simplifies updating clusters

Removes the requirement for manual drain stop / VM migrations

Drain-stops hosts in turn and migrates workloads to other nodes
Affinity rules are maintained
Affinity rules are invoked during drain stop
Rules can be soft or hard
If hard rules cannot be complied with, prioritisation is applied
May be used for all cluster workloads: Hyper-V, SoFS
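A hedged example of triggering the CAU run described above; the cluster name and limits are assumptions.

```powershell
# Run a Cluster-Aware Updating pass against the Hyper-V cluster (cluster name assumed)
Invoke-CauRun -ClusterName "HVCluster01" -CauPluginName "Microsoft.WindowsUpdatePlugin" `
    -MaxFailedNodes 1 -RequireAllNodesOnline -Force
```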

VM high-availability NICs

No requirement for multiple vNICs in a VM
The vSwitch takes care of vNIC to pNIC mapping / failover
Only required to meet performance goals
Additional consideration should be applied to this configuration

SR-IOV considerations
Key for high-performance and low-latency applications
Direct 1-to-1 mapping of pNIC to vNIC
No inherent failover of the vNIC if the pNIC fails
When using SR-IOV, multiple NICs must be exposed to the VM
Set up a VM-based NIC team
As SR-IOV pNICs on the host will be dedicated to the VM's use, this creates potential load and pNIC utilisation challenges on the host
Consider using a non-SR-IOV NIC as the second vNIC
Provides fault tolerance
Partially mitigates pNIC usage issues
A non-SR-IOV vNIC will automatically be moved to a working pNIC by the vSwitch
Performance degradation will occur

Hyper-V Replica (HVR)

Hyper-V Replica Overview

Simple

Affordable

Flexible

Inbox replication

Application agnostic

Storage agnostic

Hyper-V Replica

Hyper-V host to Hyper-V host VM replication solution

Inbox failover and DR solution for VMs

Supports point-in-time replication of VMs
Planned and unplanned failover support
Off-network target support
Remote network IP injection (into the vNIC)

2012 R2 improvements
Reduction in IO overhead
Support for a tertiary replication location
Choice of replication interval
IO overhead increases with replication interval frequency
Multi-VM applications are potentially complex to manage
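A minimal sketch of enabling Hyper-V Replica for one VM with a 5-minute interval; the VM and replica host names are assumptions.

```powershell
# Replicate a VM to a second Hyper-V host every 5 minutes, then seed the initial copy
Enable-VMReplication -VMName "Tenant01" -ReplicaServerName "HV-Replica01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "Tenant01"
```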

Azure Site Recovery (ASR)

Azure Site Recovery Overview

Azure Service for managing cross-site protection & recovery

Multi-VM replication and failover solution, including automation and runbooks
Simple, at-scale configuration of VM protection
Reliable cloud-based recovery plans
Consistent user experience for remote management
Extensible from the ground up

ASR Deployment Options

On-prem to on-prem (on-prem Hyper-V hosts at each site)

SC VMM required at all locations

Direct routable access between each site (to allow HVR to replicate)

Secondary and tertiary replication targets supported

Recovery plans managed by yourself

Failover managed by yourself

ASR Deployment Options

On-prem to Azure (on-prem Hyper-V hosts replicating to Azure)

ASR plug-in installed on all Hyper-V hosts to allow replication to and from Azure

Recovery plans managed by yourself

Failover managed by yourself

ASR Deployment Options

On-prem to a validated service provider (on-prem Hyper-V hosts replicating to the service provider)

Publishing of Hyper-V hosts required to allow replication

Recovery plans managed by the service provider

Failover managed by the service provider

ASR deployment option considerations

On-prem to on-prem
Great option if you already have more than one location (DC)
Low cost – primary costs are the MS ASR fee
Potentially complex creation of recovery plans and failover process

On-prem to Azure
Great if you do not have a second location (DC)
Very simple initial setup and maintenance
Costs are lower than many other in-market DR solutions
Potential data sovereignty considerations (if no Azure DCs are in region)
Potentially complex creation of recovery plans and failover process

On-prem to service provider
Great if you do not have a second location (DC)
Potentially complex setup
Excellent solution for meeting regulatory or data sovereignty requirements
Fully managed experience, no recovery plan creation or failover planning required

Understanding ASR

ASR workflow

Planning: registration, capacity planning, pre-reqs
Configure: cloud configuration, networks, storage
Protect: identify candidate apps, enable protection, recovery plans
Monitor: jobs, resources
Recovery: drill (DR testing), planned failover, unplanned failover

Summing up – Part 2

Summary

Making things available
How to create highly available VMs
Clustered VMs via shared VHDx
Using Cluster-Aware Updating to simplify Hyper-V cluster management and maintenance
Use Hyper-V host affinity to prevent all eggs being in one basket

Staying calm when the lights go out
Providing DR and failover solutions: Hyper-V Replica, Azure Site Recovery

(Brief) questions

Windows Server 2016 – evolution of the software defined DC

Take the power capability of Windows Server 2012 R2

Drive down costs

Reduce complexity and simplify management

Gain valuable new services

Storage – Windows Server 2016

Evolution of Scale out file server

Storage Spaces Direct
Direct-attached, instead of shared, disk
Reduced disk costs
Less costly SATA SSDs
Reduced requirement for SSD disks
In-place upgrade of a shared-SAS SoFS
No migration path from shared SAS to a Storage Spaces Direct SoFS

Storage Spaces Direct

[Diagram: shared-SAS Scale-Out File Server – Windows Server nodes all attached to shared storage JBODs]

[Diagram: Storage Spaces Direct Scale-Out File Server – Windows Server nodes connected over SMB 3.0, each with its own storage JBODs]

[Diagram: Storage Spaces Direct Scale-Out File Server – Windows Server nodes with the storage inside the nodes, connected over SMB 3.0]
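A hedged Windows Server 2016 sketch of standing up Storage Spaces Direct on an existing cluster, using the cmdlet names as released; the volume name and size are assumptions.

```powershell
# Run on a cluster node: enable Storage Spaces Direct, then carve a CSV volume for VMs
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore01" `
    -FileSystem CSVFS_ReFS -Size 4TB
```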

Storage Replica – synchronous data replication between floors, buildings, campuses, cities...

Storage replica

Replication
Block-level, volume-based
Synchronous & asynchronous
SMB 3.1.1 transport

Flexibility
Any Windows data volume
Any fixed disk storage
Any storage fabric

Management
Failover Cluster Manager
Windows PowerShell
WMI

End-to-end MS storage stack

Storage replica

Volume-to-volume replication solution
Block level, not file level – not DFSR

Volume agnostic

Supports sync and async replication
Latency and bandwidth requirements affect sync capability
The destination volume is always dismounted – not a read-write or read-only destination
One to one – no A-B-C, A-B+A-C or one-to-many
You can still use other replication to add legs (e.g. Hyper-V Replica for A-B, SR for A-C)
SR is a great replication solution for Azure Site Recovery
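A hedged Windows Server 2016 sketch of the server-to-server scenario described above; server names, replication group names and volume letters are assumptions.

```powershell
# Server-to-server replication of a data volume, with a dedicated log volume on each side
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
Get-SRGroup   # inspect the replication group state
```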

Storage Replica – scenarios

Stretch cluster

Cluster to cluster

Two separate servers

Single server – volume to volume

Converged or disaggregated storage?

Should storage and compute be separated or kept together?

Disaggregated: data storage and hypervisor hosts in separate systems
Converged: data storage and hypervisor in one system

Disaggregated
Allows compute and storage to scale independently
Removes the bottleneck of storage on a specific hypervisor
Drives down operational costs as scale-out increases

Networking – Windows Server 2016

Physical networking – Windows Server 2016

RDMA support in a fully converged deployment

RDMA support – today

[Diagram: in the parent OS, RDMA traffic uses dedicated pNICs outside the switch-agnostic team, while the vSwitch with QoS carries the VM network adapters]

RDMA requires dedicated NICs

RDMA support in Server 2016 – fully converged

[Diagram: in the parent OS, the switch-agnostic team feeds a single vSwitch with QoS; VM network adapters and RDMA-enabled host vNICs all share the same converged pNICs]
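A hedged Windows Server 2016 sketch of the fully converged RDMA layout above: host vNICs for SMB on the converged switch with RDMA enabled per vNIC. Switch and adapter names are assumptions, and the team shown uses Switch Embedded Teaming as introduced in 2016.

```powershell
# Converged switch with embedded teaming, host vNICs for SMB, RDMA enabled on each vNIC
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "ConvergedSwitch"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"
Get-SmbClientNetworkInterface | Where-Object RdmaCapable   # verify SMB sees RDMA
```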

Virtual networking – Windows Server 2016

Virtual networking

Network controller
Centralized policy management
Service chaining
Support for virtual appliances
Firewalls
Scalable software load balancer
Replaces NLB
Full scale-out and distributed topology

Understanding the network controller

[Diagram: the Microsoft Network Controller pushes policy to Windows Server Hyper-V hosts (virtual switch, virtual networks, distributed router, distributed firewall, converged NIC with RDMA), to the unified edge (software load balancing, service chaining, VPN), to bare-metal compute, and to physical network devices handling switching, routing, firewalling and load balancing]

Software Load Balancer (SLB): overview

Scalable and available
Proven with Azure – scale out to many Multiplexer (MUX) instances
High throughput between MUX and virtual networks

Flexible and integrated
Reduced capex through multi-tenancy
Access to physical network resources from the tenant virtual network
Centralized control and management through the Network Controller

Easy management
Easy fabric deployment through SCVMM
Integration with existing tenant portals via Network Controller REST APIs or PowerShell

[Diagram: the Network Controller manages SLB MUX instances between the edge routing infrastructure and the blue, purple and green tenant virtual networks]

Service chaining

Problem: tenant dependencies on 3rd-party appliances. How to integrate them into Microsoft's SDN platform?

Solution: enable tenants to bring any virtualized network function to their virtual networks
No changes needed in the virtual appliance
All major OSs supported – Linux, BSD, and Windows
Policy-based ordering; support for pre-defined groups
Easy management through SCVMM and Windows Azure

Service chaining – example

[Diagram: across three Hyper-V hosts, the Network Controller combines a rule (SourceIP=Any, DestinationIP=192.168.0.0/24, Protocol=Any, SourcePort=Any, DestinationPort=Any), a service chain (Element1="3rd Party Antivirus VM") and a group (Virtual Network="MyNetwork") so that traffic from the gateway to the VM at 192.168.0.2 is steered through the 3rd-party antivirus VM]

Datacenter Firewall

Problem: east/west traffic security; flexibility and SDN integration

Solution: multi-tenant Datacenter Firewall service
Protect your workloads with dynamic firewall policy
Group your workloads with network security groups
Hybrid cloud consistency with Azure ACLs
Easy management through SCVMM and Windows Azure

Compute – Windows Server 2016

Hyper-V

In-place cluster upgrade
Removes the requirement for the traditional drain / evict workflow
Mixed-version clusters are supported
Allows VMs to live migrate and fail over between Hyper-V host versions

Loss-of-data-path handling
Improved behaviour on VHDx and configuration file loss
Hyper-V does not get confused over loss of a configuration file
Makes recovery easier after a storage failure or a short period of "availability wobble"

Intelligent storage QoS
Managed as a property of the VM
Applied at the SoFS
Migrates with the VM as it moves from host to host

Support for vRSS on the vNIC into the parent
Allows for much faster NIC speeds into the parent via the vSwitch

Shielded VM

What is a 'Shielded VM'?

"A shielded VM is one that is protected from fabric admins through virtualization-based security and various cryptographic technologies."

…Fixes the "we need to trust our fabric admin or hoster" problem…

A bit more specifically…

What is it and who’s it for?

A few highlights

As a hoster:

"I can protect my tenants' VMs and their data from host administrators."

As a tenant:

“I can run my workloads in the cloud while meeting regulatory/compliance requirements.”

As an enterprise:

"I can enforce strong separation of duties between Hyper-V administrators and sensitive VM workloads."

Hardware-rooted technologies that strictly isolate the VM guest operating system from host administrators

A Host Guardian Service that is able to identify legitimate Hyper-V hosts and certify them to run a given shielded VM

Virtualized Trusted Platform Module (vTPM) support for generation 2 virtual machines

Nano Server

Nano Server

Minimum-footprint infrastructure and application OS

New Windows Server installation option
'Cloud-first' refactoring
Essential infrastructure OS requirements
Essential application OS requirements
Server roles and features enabled: Hyper-V, clustering, storage
Next-gen application platform, including run-time
Windows Server Containers
Hyper-V Containers

[Diagram: Windows Server 2016 installation options, from Nano Server at the core through Server Core and the Minimal Server Interface to the full GUI shell]

Powers modern cloud infrastructure

Faster time to value – order of magnitude quicker deployment and start up time

Enhanced productivity & lower downtime with much lower servicing footprint

Enhanced protection with significantly lower attack surface

Breakthrough efficiency with much lower resource consumption

 

Optimized for next-gen distributed applications

Higher density and performance for container-based apps and micro-services

Supports next-gen distributed app development frameworks

Can interoperate with existing server applications (e.g., an app front end running on Nano Server can work with a SQL DB running on Server Core)

Understanding containers

A new approach to build, ship, deploy, and instantiate applications

Physical: apps traditionally tied to a physical server; new apps required new servers for resource isolation

Virtual: higher consolidation ratios and better server utilization; high app compatibility

Containers: package and run apps within physical or virtual machines
Benefits: enable modern app patterns, empower dev-ops collaboration, agility with resource control

Why Containers?

Developers

‘Write-once, run-anywhere’ portability

Composable, lightweight micro-services deployed as IaaS or PaaS

Rapid scale-up and scale-down

Operations

Enhances familiar IT deployment models

Flexible levels of isolation

Higher compute density

DevOps

Agility/ productivity for developers

Flexibility and control for IT

Services – Windows Server 2016

Remote desktop services

vGPU enhancements – OpenGL and OpenCL support
Provides support for a broad range of new graphics and compute applications
Dedicated vRAM allocation
Important for certain application compatibility
Tested against leading industry applications: Adobe, Autodesk, Schlumberger
Support for vGPU in Gen 2 VMs

Support for Windows Server as the VDI OS
Works correctly with RD Broker
Enhanced "client-like" end-user experience
Critical for hosters
Support for vGPU in the server OS allows vGPU to be deployed in more scenarios

Policy based DNS

Support different DNS zone files depending on incoming connection criteria: time of day, location, load / performance of internal systems

Traffic is directed at your network front side
Prevents load entering your network
Removes certain requirements to use load balancers
Great solution for load balancing between on-prem and cloud-based resources

Move load based on time of day to meet peak and trough requirements

Send connections to the datacentre nearest to them

Balance incoming connections based on internal system load

Redirect connections to another system during failover or maintenance
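A hedged Windows Server 2016 sketch of answering by client location with DNS policies and zone scopes; the subnet, zone, scope and record values are assumptions.

```powershell
# Answer queries from a UK subnet with records from a UK-specific zone scope (names assumed)
Add-DnsServerClientSubnet -Name "UKSubnet" -IPv4Subnet "10.10.0.0/16"
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "UKScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -ZoneScope "UKScope" `
    -A -Name "www" -IPv4Address "10.10.1.10"
Add-DnsServerQueryResolutionPolicy -Name "UKPolicy" -ZoneName "contoso.com" `
    -Action ALLOW -ClientSubnet "eq,UKSubnet" -ZoneScope "UKScope,1"
```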

Hardware planning

Hardware planning

Storage
Look for vendor-approved solutions
Applies to shared SAS and Storage Spaces Direct
Use 4K disks
SSD – enterprise grade, high IO, read / write balanced

Networking
Network speed – 10 Gbps minimum; consider 40 Gbps for storage
Source RDMA NICs
Network virtualisation offload on NICs: NVGRE, VXLAN

Compute
Make sure all systems have SLAT support on the CPU
TPM 2.0 support

Demo – VDI in Windows Server 2016

Philip Moss

Where to find me

Microsoft MSE booths: Hyper-V, Storage, CPS

Philip.moss@nttxselect.com

BRK3503 - Best Practices for Deploying Disaster Recovery Services with Microsoft Azure Site Recovery

Questions

Learn more with FREE IT Pro resources

Free technical training resources:

On-demand online training: http://aka.ms/moderninfrastructure

Expand your Modern Infrastructure knowledge

Free ebooks:

Deploying Hyper-V with Software-Defined Storage & Networking: http://aka.ms/deployinghyperv

Microsoft System Center: Integrated Cloud Platform: http://aka.ms/cloud-platform-ebook

Join the IT Pro community: Twitter @MS_ITPro

Get hands-on: free virtual labs:

Microsoft Virtualization with Windows Server and System Center: http://aka.ms/virtualization-lab

Windows Azure Pack: Install and Configure: http://aka.ms/wap-lab

Visit MyIgnite at http://myignite.microsoft.com or download and use the Ignite Mobile App.

Please evaluate this session

Your feedback is important to us!