Architecting a Modern Datacenter


Presentation Transcript


Architecting a Modern Datacenter: Windows Server 2012 R2 End-to-End Design

Philip Moss, Managing Partner IT, NTTX

CDP-B362

Journey

Platform

Building out a software defined fabric

Disaster Recovery

Making DR work in the real-world

High-Availability

Building an end-to-end HA solution

Software defined platform

Delivering the software defined datacentre

…otherwise known as: Windows Server 2012 R2 in 60 minutes…ish…

Business drivers

Increase margin

Drive down operational costs

Deliver compelling and dynamic services

Diverse workloads

Domain Controllers
DNS (internal and public)
Exchange
SharePoint
Lync
SQL
WDS
File Servers
App-V
UE-V
RDSH
VDI
DPM
DHCP
Bespoke client line-of-business applications

Engineering goals

Engineering goals

Support for multiple diverse workloads
Full end-to-end high-availability
100% virtualisation
100% automation
Sub-system scale-out: storage, networking, compute
Cost-to-serve reduction
Removal of middleware
Hardware platform agnostic
Use of commodity hardware
Just-in-time hardware provisioning

Architecture

Logical architecture

Storage

Networking

Compute

Datacentre topology

Storage

Data Delivery – Scale Out File Server

Scale Out File Server

Storage Spaces – Windows Server as the storage controller
SMB 3 as data transport
Replaces iSCSI and Fibre Channel
Cheap generic JBODs
Multi-point highly available
Continuous availability
Full scale out
Removes the requirement for a SAN
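As an illustrative sketch (not from the deck) of how a Scale-Out File Server role and a continuously available share are typically stood up with PowerShell – the cluster, role, path and account names are assumptions:

    # Add the Scale-Out File Server role to an existing failover cluster
    Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "StorCluster"

    # Publish a continuously available SMB 3 share on a CSV path for Hyper-V data
    New-SmbShare -Name "VMData" -Path "C:\ClusterStorage\Volume1\VMData" `
        -FullAccess "CONTOSO\HyperV-Hosts$" -ContinuouslyAvailable $true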

Storage Spaces 2012 R2 tiering

Introduced in 2012 R2
SSD tier used for high-IO data
Data moved to SSD via "heat" logic
1MB data chunks – not all of a large file needs to fit on SSD
Gained a write-back cache
Pinning allows files to be locked onto the SSD tier
Interoperation with the CSV cache: heat does not work with CSV, and does not work with redirected IO
Planning considerations: a Space using tiering without CSV could be slower than a non-tiered Space using the CSV cache; files can still be pinned to SSD
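A rough sketch of how tiering, the write-back cache and pinning map onto PowerShell – the pool, tier, disk sizes and file path here are illustrative assumptions:

    # Tiered mirror Space with a 1GB write-back cache
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD01" `
        -ResiliencySettingName Mirror -StorageTiers $ssd,$hdd `
        -StorageTierSizes 200GB,2TB -WriteCacheSize 1GB

    # Pin a hot file to the SSD tier, then let the tier optimizer move it
    Set-FileStorageTier -FilePath "E:\VMs\Gold.vhdx" -DesiredStorageTier $ssd
    Optimize-Volume -DriveLetter E -TierOptimize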

Storage Spaces 2012 R2

Write-back cache on SSD dramatically increases write performance
Use 1GB – it is possible to set it higher; do not do this
Dynamic rebuild using spare capacity
No longer a requirement for a dedicated hot-spare
Simply leave unallocated headroom in the disk pool
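A hedged sketch of the dynamic-rebuild behaviour described above, assuming a pool named "Pool1" with unallocated headroom left for rebuilds:

    # Retire missing disks automatically so Spaces rebuilds into spare capacity,
    # then repair any degraded virtual disks
    Set-StoragePool -FriendlyName "Pool1" -RetireMissingPhysicalDisks Always
    Get-VirtualDisk | Where-Object OperationalStatus -ne "OK" | Repair-VirtualDisk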

Storage Spaces – design considerations

Data integrity
2-way mirror provides only limited disk failure protection; a suitable solution if using application-level HA
3-way mirror gives a good level of disk failure tolerance, but is very costly in disk usage (66% raw capacity loss)
Parity Spaces are now supported for clusters, but performance is not good
Enclosure awareness provides protection against entire JBOD failure
Setup considerations:
3 JBODs for a 2-way mirror, single enclosure failure
3 JBODs for a 3-way mirror, single enclosure failure
5 JBODs for a 3-way mirror, dual enclosure failure

Storage Spaces – 2012 R2 design considerations

Larger column counts are important
Column count defines how many disks are written across for any given write operation
Read operations use all copies of the data, giving a significant performance increase
Column count 4, 2-way mirror: read = 8-disk performance
Column count 4, 3-way mirror: read = 12-disk performance
Potential latency issues when using column counts over 4
Column count is shared between SSD and HDD, so SSDs can become the limiting factor
Larger pools are more efficient, but increase disk failure planning complexity
Do not exceed 80 disks per pool
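A rough sketch of how the mirror copies, column count and enclosure awareness discussed above map onto virtual disk creation – pool name, disk name and size are illustrative:

    # 3-way mirror across 4 columns on an enclosure-aware pool of JBODs
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV01" `
        -ResiliencySettingName Mirror -NumberOfDataCopies 3 -NumberOfColumns 4 `
        -IsEnclosureAware $true -Size 10TB -ProvisioningType Fixed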

De-Duplication

Supported for VDI and DPM workloads
Tiering is key to a de-dup deployment for de-dupped data (the chunk store IO will be massive)
The chunk store cannot be pinned – use heat
CPU and RAM considerations
Can now run on hot (open) VHDx files, which consumes resources
Runs per volume, so planning is required to ensure CPU and RAM are not exhausted
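An illustrative sketch of enabling de-duplication for a VDI volume – the drive letter is an assumption:

    # Enable dedup tuned for open VHDx files (VDI) and run an optimization pass
    Import-Module Deduplication
    Enable-DedupVolume -Volume "E:" -UsageType HyperV
    Start-DedupJob -Volume "E:" -Type Optimization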

Design Considerations - SoFS nodes

SMB client connection redirection

Reduces load / requirements for CSV network

Applies to 2012 R2 / Win 8.1 and later

Incoming connections are "moved" to the node that owns the storage
Careful planning is required if SoFS is to be used with a DFS namespace
Increased RAM and CPU overhead, driven by heat and de-dup overheads
2012: a single physical processor and 12GB of RAM was fine; 2012 R2: dual CPUs and 128GB plus of RAM
Networking: plan for SMB multi-channel; if using RDMA there are no teaming options
As SoFS is a clustered solution, a separate IP is required for each NIC interface
Planning considerations on Hyper-V hosts: LACP is an option, however there are potential challenges with distribution hash settings on switches
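A quick way to confirm the SMB multi-channel behaviour described above from a Hyper-V host (no assumptions beyond having live SMB connections to the SoFS):

    # Show active multi-channel connections and per-interface RSS/RDMA capability
    Get-SmbMultichannelConnection
    Get-SmbClientNetworkInterface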

Converged or disaggregated storage?

Should storage be located with compute, or kept separate?

An IT industry hot topic

Disaggregated

Philip Moss, NTTX

Allows compute and storage to scale independently
Removes the bottleneck of tying storage to a specific hypervisor
Drives down operational costs as scale-out increases

Network

Software defined networking 101

Data delivery via a standard protocol: SMB 3.0
Load-balancing and failover: teaming (switch agnostic), load aggregation and balancing, SMB multi-channel
Commodity L2 switching: cost effective networking (Ethernet) – RJ45, SFP+, QSFP+
Quality of Service: multiple levels
Host workload overhead reduction: RDMA
Easily scales

Switch Agnostic NIC Teaming

Integrated Solution for Network Card Resiliency and load balancing

Vendor agnostic and shipped inbox

Enables teams of up to 32 NICs
Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of a NIC outage
Includes multiple modes: switch dependent and switch independent
Multiple traffic distribution algorithms: Hyper-V switch port, hashing and dynamic load balancing

[Diagram: NIC teaming – physical network adapters grouped into team network adapters presented to the operating system]
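A minimal sketch of creating such a team – the team and adapter names are illustrative:

    # Switch-independent team using the Dynamic load-balancing algorithm (2012 R2)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic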

RDMA Considerations

Why RDMA: greatly reduces CPU overhead in Hyper-V hosts
Allows more resources for running VMs
Improved data transport speed
Two main Ethernet-based options: RoCE and iWARP
Primary vendor options: Chelsio and Mellanox
Comparison: iWARP is routable; RoCE requires DCB (Data Centre Bridging) – confirm support is available from your switching platform
Which is better? There are pros and cons to both: operational throughput, setup complexity, deployment scenario requirements (is routing between subnets required, etc.)
InfiniBand is an option – if you have an existing investment, IB is an excellent route to take
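A hedged sketch of the host-side checks and the DCB configuration a RoCE deployment typically needs – adapter names and the 50% bandwidth split are assumptions, not from the deck:

    # Confirm the NICs expose RDMA and that SMB Direct will use them
    Get-NetAdapterRdma
    Get-SmbClientNetworkInterface

    # RoCE only: enable DCB and give SMB (TCP 445) a lossless priority class
    Install-WindowsFeature Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    Enable-NetQosFlowControl -Priority 3
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetAdapterQos -Name "NIC1"
    Enable-NetAdapterQos -Name "NIC2"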

Now is the time for RDMA:

Hardware is available

Software is mature

Vendors are on-board

To be, or not to be (converged) – that is the question

Converged networking = big wins

Reduces complexity and cost

Increases flexibility

Fully converged
Single network, no dedicated service networks
Use Windows networking capability to define the system: NIC teaming, Hyper-V vSwitch QoS, SMB QoS
Universal vSwitch binding; parent loopback for SMB data to the host
Gain complete control over QoS – excellent resource utilisation, managing networking resources between workloads (SMB 3.0, VM traffic, live migration)

Semi-converged
Dedicated NICs for SMB 3.0; dedicated (teamed) NICs for VM traffic
Critical for RDMA deployments in 2012 R2: RDMA can't work via the vSwitch and has no teaming support
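An illustrative sketch of the fully converged pattern – switch, vNIC names and weight values are assumptions:

    # One vSwitch on the team, parent vNICs for host traffic, weight-based QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "SMB" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "SMB" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20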

[Diagram: fully converged – four pNICs in a switch-agnostic team, a single vSwitch carrying both parent OS vNICs and VM vNICs, with QoS applied at the vSwitch]

[Diagram: semi-converged – pNICs dedicated to SMB/RDMA in the parent OS, with the remaining pNICs in a switch-agnostic team behind the vSwitch for VM traffic, plus QoS]

Network speed choices

Gen 1: 1Gbps using multiple connections
Very cheap NICs and switch ports
Attractive as teaming / SMB multi-channel in Windows made this viable
Cabling nightmares

Gen 2: 10Gbps using multiple connections
Cost viable due to NIC and port cost reductions
Significant throughput achievable with 4 connections in each server
Cabling challenges remain
Deployment over very cost effective RJ45

Gen 3: 40Gbps
NIC and port costs are still high, but the available speed makes the trade-off acceptable
Requires QSFP+ – expensive cables and transceivers
Performance from only 2 ports is very high; cabling issues mitigated
Avoids the requirement for VM teaming and mitigates many vRSS challenges (vRSS places significant overhead on the host)
Makes very high performance VMs simpler to deploy with increased flexibility

Compute

Hyper-V – the basics

Hyper-V 3.0: 64-node clusters
8,000 VM limit prevents clusters approaching this number – modern hardware allows for huge VM counts per node
SMB 3.0 support
Dynamic RAM
vGPU support
Inter-version live migration between clusters – key for 2012 to 2012 R2 migrations
Live migration compression
SMB prioritisation

De-coupled data delivery

SMB 3.0 support, introduced in Server 2012
Access VHDx over file, not block, storage: \\servername\sharename
Replaces iSCSI or FC solutions
Simplifies Hyper-V solution design – scale-out / scale-up require less plumbing
Supported by multiple vendors: MS Scale-Out File Server, SAN providers
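A minimal sketch of placing a VM's VHDX on the SMB share rather than block storage – the VM name, share path and sizes are illustrative:

    # Create a Generation 2 VM whose VHDX lives on the SoFS SMB share
    New-VM -Name "VM01" -Generation 2 -MemoryStartupBytes 4GB `
        -NewVHDPath "\\servername\sharename\VM01.vhdx" -NewVHDSizeBytes 100GB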

Hyper-V 2012 R2

Generation 2 VMs: UEFI based, secure boot support, WDS support without using a legacy NIC, no support for IDE, VHDx only
Dynamic VHDx resize: enables dynamic increase or decrease in VHDx size without taking the VM offline – a key feature for IaaS clients
Dynamic quorum selection: introduced in 2012 R2, very useful for clusters that grow over time
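An illustrative sketch of the online resize – the path and new size are assumptions, and the disk must be attached to the VM's SCSI controller for an online operation:

    # Grow a VHDX attached to a running VM (2012 R2 online resize)
    Resize-VHD -Path "\\servername\sharename\VM01.vhdx" -SizeBytes 200GB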

VM-based clusters using shared VHDx

Introduced in 2012 R2
Enables a 100% VHDx-based VM storage solution
Removes reliance on synthetic iSCSI or FC for shared storage in the VM
Primary workloads: HA file servers, legacy SQL servers, bespoke line-of-business applications requiring a shared disk
Considerations: no support for Hyper-V Replica, therefore no Hyper-V Azure Site Recovery support; stretch clusters cannot be created
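A hedged sketch of attaching one VHDX to both guest-cluster nodes as a shared disk – the VM names and path are assumptions:

    # Attach the same VHDX (on a CSV or SoFS share) to each node of the guest cluster
    Add-VMHardDiskDrive -VMName "SQLNode1" -Path "\\servername\sharename\Shared.vhdx" `
        -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "SQLNode2" -Path "\\servername\sharename\Shared.vhdx" `
        -SupportPersistentReservations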

Networking – Hyper-V

vRSS support now available on vNICs
Addresses the limitation of a vNIC being bound to 1 CPU core and therefore maxing out
Allows for very high performance VMs
vRSS puts significant load on the host CPU; no vRSS to the parent, so it is not viable to drive high network bandwidth into the parent
New teaming algorithm, Dynamic: combines Hyper-V port with address hash – the recommended setting
VM-based teaming: driving vNICs above the wire speed of the physical NIC is very difficult; avoid teaming through the use of higher speed physical NICs
SR-IOV – choices and trade-offs: key for low latency / high-performance VM workloads, but limits VM deployment options as it requires a host with dedicated spare NICs; dedicated NIC requirements increase if the VM requires HA NIC capability – for a service provider this level of rigidity creates considerable challenges
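Two small illustrative fragments for the options above – the adapter and VM names are assumptions, and SR-IOV also requires the vSwitch to be created with -EnableIov $true:

    # Inside the guest: enable RSS on the vNIC so traffic spreads across vCPUs
    Enable-NetAdapterRss -Name "Ethernet"

    # On the host: assign an SR-IOV virtual function to a VM's network adapter
    Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100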

Quality of Service – Hyper-V

Storage QoS: define storage IOPS limits on a per-VM basis
The vSwitch is your friend: define QoS behaviours on a per-VM basis
If using parent loopback, define QoS to control SMB traffic
QoS helps you deal with "noisy neighbour" syndrome
No solution for CPU or RAM challenges today
SMB contains its own channel prioritisation logic
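A minimal sketch of capping a noisy neighbour – the VM name, controller location and limit values are assumptions:

    # Per-disk IOPS cap plus a low bandwidth weight on the vNIC
    Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 1000
    Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthWeight 10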


Hyper-V Features

Hyper-V Replica

Failover and DR solution for VMs
Supports point-in-time replication of VMs
Planned and unplanned failover support
Off-network target support; remote network IP injection (into the vNIC)
2012 R2 improvements: reduction in IO overhead, support for a tertiary replication location, choice of replication interval
IO overhead increases with replication interval frequency
Multi-VM applications are potentially complex to manage – consider Azure Site Recovery as a good solution (covered in more detail in my later session)
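An illustrative sketch of enabling replication for one VM – the VM and replica server names are assumptions, and the interval uses the 2012 R2 choice of 30, 300 or 900 seconds:

    # Replicate a VM to a DR host every 5 minutes and start the initial copy
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "drhost.contoso.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
    Start-VMInitialReplication -VMName "VM01"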

Hyper-V Network Virtualization

Tenant “bring your own subnet”

Introduced in 2012

Solves the requirement to do VLAN tagging and mitigates the 4,096 VLAN ceiling
In 2012, perimeter breakout was challenging: it required a 3rd-party solution and was complex to deploy and manage
Multi-tenant site-to-site gateway (2012 R2): allows edge routing of the virtualised network, with full support for layer 3 routing and IPsec site-to-site gateways
Implemented as an NVGRE-aware router inside the RRAS service; 400Mbps maximum throughput per tunnel
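As a minimal illustration of the tenant-isolation piece only – in practice VMM manages the NVGRE lookup records, and the VM name and subnet ID here are assumptions:

    # Place a tenant VM's vNIC into an NVGRE virtual subnet
    Set-VMNetworkAdapter -VMName "Tenant1-VM01" -VirtualSubnetId 5001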

Virtual network isolation

[Diagram: green and purple virtual machines on the same physical server and physical network, isolated into separate green and purple virtual networks by network virtualization]

Network virtualization gateway

Bridge between VM networks and physical networks

Multi-tenant VPN gateway built in to Windows Server 2012 R2
Integral multi-tenant edge gateway for seamless connectivity
Guest clustering for high availability
BGP for dynamic route updates
Multi-tenant aware NAT for Internet access

[Diagram: Contoso and Fabrikam tenant networks connecting through resilient HNV gateways on service provider Hyper-V hosts, out to the Internet]

Upgrade considerations

Migration 2012 – 2012 R2

No in-place upgrade of Scale-Out File Server
Drain and clear all storage, wipe and rebuild
Requires significant headroom in storage capacity
Must use storage migration – RDMA is highly recommended between SoFS nodes to increase storage migration performance
No in-place upgrade of the compute cluster to 2012 R2
Live migration between 2012 and 2012 R2 Hyper-V hosts; if short of host capacity, use evict, upgrade, rejoin
PowerShell highly recommended
vNext: in-place upgrades are supported for the Scale-Out File Server and the Hyper-V cluster
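A sketch of the scripted cross-version move – host name and destination share are assumptions:

    # Live migrate a VM (with its storage) from a 2012 host to a 2012 R2 host
    Move-VM -Name "VM01" -DestinationHost "HV-R2-01" `
        -IncludeStorage -DestinationStoragePath "\\servername\sharename\VM01"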

Services

Remote Desktop Services

RDP 8.1: UDP support
vGPU: DirectX 11.1
Audio / video and touch remoting
USB bus redirection
Touch and audio / video performance improvements
Connection / reconnection performance improvements
Dynamic screen / resolution resize
Coming in vNext – greatly enhanced vGPU capabilities: OpenGL 4.4, OpenCL 1.1, dedicated vRAM allocation

High-level design

Storage – SoFS
Network – SMB and converged software-defined networking
Compute – Hyper-V
2012 R2 changes and improvements
Deployment and upgrade considerations
Services: Hyper-V network virtualization (NVGRE), Remote Desktop Services

Summary

Questions

Related content

CDP-B325 Design Scale-Out File Server Clusters with Direct Attach Storage in the Next Release of Windows Server – Friday, 8:30 AM - 9:45 AM
CDP-B222 Software Defined Storage in the Next Release of Windows Server – Tuesday, 5:00 PM - 6:15 PM
CDP-B358 Windows Server Data Deduplication at Scale: Dedup Updates for Large-Scale VDI and Backup Scenarios – Friday, 2:45 PM - 4:00 PM

Come visit us in the Microsoft Solutions Experience (MSE)!
Look for the Cloud and Datacenter Platform area, TechExpo Hall 7

For more information

Windows Server – Windows Server Technical Preview
http://technet.microsoft.com/library/dn765472.aspx

Microsoft Azure
http://azure.microsoft.com/en-us/

System Center – System Center Technical Preview
http://technet.microsoft.com/en-us/library/hh546785.aspx

Azure Pack
http://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack

Resources

Learning – Microsoft Certification & Training Resources
www.microsoft.com/learning

Developer Network
http://developer.microsoft.com

TechNet – Resources for IT Professionals
http://microsoft.com/technet

Sessions on Demand
http://channel9.msdn.com/Events/TechEd

Please complete an evaluation form – your input is important!

TechEd Schedule Builder: CommNet station or PC
TechEd Mobile app: phone or tablet
QR code

Evaluate this session

© 2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.

The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.