Update on Microsoft Technology (presentation transcript)
Uploaded by skylar (@skylar), 2021-05-15

Presentation Transcript

Update on Microsoft Technology

Azure Sizing

Datacenter: Dublin, Ireland

Project Natick: https://natick.research.microsoft.com/

[World map of Microsoft datacenter and network locations; legend: data center, owned capacity, future capacity, leased capacity, edge site]

DCs and network sites (not exhaustive): the Azure inter-DC dark fiber backbone

Geos and regions
The world is divided into geographies. A region is defined by a bandwidth and latency envelope (Region 1 and Region 2, 100's of km apart).
https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions

Availability Zones | intra-region resilience
AZs provide for HA in the face of localized DC and software failures.
Customer promise: a failure in one AZ should not cause other AZs within the same region to fail.
Different water, power lines, network, generators.
Customers can do application-level synchronous replication between AZs. Three is enough for quorum.
Regions offer multiple Availability Zones (AZs): at least three AZs, and an AZ consists of one or more datacenters.
[Diagram: a region hosting Subscription 1 and Subscription 2; Stamps – Fault Domains – Upgrade Domains]

Azure networking
DC Hardware | Services | Intra-Region | WAN Backbone | Edge and ExpressRoute | CDN | Last Mile
• SmartNIC/FPGA • SONiC • Virtual Networks • Load Balancing • VPN Services • Firewall • DDoS Protection • DNS & Traffic Management • DC Networks • Regional Networks • Optical Modules • Software WAN • Subsea Cables • Terrestrial Fiber • National Clouds • Internet Peering • ExpressRoute • Acceleration for applications and content • E2E monitoring (Network Watcher, Network Performance Monitoring)
[Diagram: Enterprise DC/Corpnet and consumers (enterprise, SMB, mobile) reaching Azure Region 'A' and Azure Region 'B' through regional networks, the Microsoft WAN, edge sites, ExpressRoute, CDN, internet exchanges, and cable carriers]

RNG regional architecture
A region is a contiguous geographical area up to roughly 100 km in diameter (2.0 ms RTT), with its DCs attached to the Microsoft backbone through regional network gateways (RNGs).
Regional network gateway: a massively parallel, hyperscale DC interconnect; space and power protected.
Data centers come in Small, Medium, or Large (T-shirt sizes) and only contain server racks and the DC network.
RNGs are sized to support growing the region by adding data centers.

Azure Network Emulator
What it is: containerized router VMs linked via VXLAN tunnels to create a faithful replica of the production network. "Bug compatible" emulation of the production network gives network engineers a realistic test environment.
Status: used daily to de-risk major network operations. Over 12 million core-hours spent on emulation in the last six months. Numerous bugs caught before hitting the production network.

Azure SONiC
[Diagram: virtual links]

Hyperscale SDN
Management plane: create a tenant. Control plane: plumb tenant ACLs to switches. Data plane: apply ACLs to flows.
Key to flexibility and scale is Host SDN: the SDN control plane, Azure Resource Manager, and controller are decoupled from the data plane on the switch (host), rather than combining management, control, and data in a proprietary appliance.

Azure architecture
Azure Infrastructure (Hardware Manager, Resource Provider, Azure Fabric Controller; Compute, Networking, Storage) sits under Azure Resource Manager, fronted by the Azure Portal, CLI, and 3rd-party tools, with Authentication, Telemetry & Insights, and RBAC throughout; platform services include Service Fabric, AKS, PaaS offerings, and Web Apps.

Azure SDN architecture
Azure SDN is the basis of all network virtualization in our datacenters.
VNet: the logical network for all workloads, regardless of chosen service model or application container.
Decoupled SDN allows compute to evolve and converge to a single allocator.
Components: Azure Resource Manager, Compute RP, Network RP, Regional Network Manager, Network State Manager, software load balancer, Directory Service, Compute Controller.

Azure Storage architecture
Geo-replication runs between storage stamps; each stamp has a load balancer, Front-Ends, a Partition Layer, and a DFS Layer, with intra-stamp replication.

Azure compute architecture
[Diagram: global, regional, cluster, and node scopes; RDS, RNM, NRP, CRP, USLB, CDS, NSM, DCM, SLB, TM; network agent, Datacenter Manager agent, Tenant Manager agent, load balancer agent; ARM (Azure Resource Manager)]
• Request network resources; inventory sync between layers; send goal state down to nodes
• NSM pushes CA:PA mappings to RDS • CDS pulls from RDS • NMAgent pulls from CDS
• RNM makes network object updates • RNM gets info from TM
• SLB finds its VIP ranges from USLB • NSM pushes VIP ranges to USLB • SLBHP is configured with the SLB endpoint
• NRP is pass-through: AllocateNtwkResources, LBProgramming, network programming

Azure Hardware

Gen 2:    Processor 2 x 6 Core 2.1 GHz | Memory 32 GiB | Hard Drive 6 x 500 GB | SSD None | NIC 1 Gb/s
Gen 3:    Processor 2 x 8 Core 2.1 GHz | Memory 128 GiB | Hard Drive 1 x 4 TB | SSD 5 x 480 GB | NIC 10 Gb/s
Gen 4:    Processor 2 x 12 Core 2.4 GHz | Memory 192 GiB | Hard Drive 4 x 2 TB | SSD 4 x 480 GB | NIC 40 Gb/s
Godzilla: Processor 2 x 16 Core 2.0 GHz | Memory 512 GiB | Hard Drive None | SSD 9 x 800 GB | NIC 40 Gb/s
Gen 5:    Processor 2 x 20 Core 2.3 GHz | Memory 256 GiB | Hard Drive None | SSD 6 x 960 GB PCIe Flash + 1 x 960 GB SATA | NIC 40 Gb/s + FPGA
Beast:    Processor 4 x 18 Core 2.5 GHz | Memory 4096 GiB | Hard Drive None | SSD 4 x 2 TB NVMe + 1 x 960 GB SATA | NIC 40 Gb/s
Gen 6:    Processor 2 x Skylake 24 Core 2.7 GHz | Memory 768 GiB DDR4 | Hard Drive None | SSD 4 x 960 GB M.2 + 1 x 960 GB SATA | NIC 40 Gb/s | FPGA Yes
Beast v2: Processor 8 x 28 Core 2.5 GHz | Memory 12 TiB | Hard Drive None | SSD 4 x 2 TB NVMe + 1 x 960 GB SATA | NIC 50 Gb/s | 3x Beast

Azure Sphere: Processor 1 x A7 Core @ 500 MHz | Memory 4 MiB | Hard Drive BYO | SSD BYO | WiFi 2.4/5.0 GHz 802.11 b/g/n | 0.00000000533 Beasts

A History of Azure VM Sizes
History of Series
Today: computing options for every workload.

Azure Compute Units (ACUs)
• Created for comparing compute performance across VM families
• Helps to quickly identify the VM family that meets your performance goals
• ACUs were first defined on the "Standard_A1": Small "Standard_A1" = 100 ACU
• All other VM ACU measurements are referenced from this baseline
[Chart: ACU per vCPU (0-350) vs. $ per kACU across the Av2, B, Fv1, Dv2, H, Dv3, Ev3, Fv2, and M series; sizes A0, A1-A4, A5-A7, A1_v2-A8_v2, A2m_v2-A8m_v2, A8-A11, D1-D14, D1_v2-D15_v2, DS1-DS14, DS1_v2-DS15_v2, D_v3, Ds_v3, E_v3, Es_v3, F2s_v2-F72s_v2, F1-F16, F1s-F16s, G1-G5, GS1-GS5, H, L4s-L32s, L8s_v2-L80s_v2, M]
Benchline: 100 ACU/vCPU at $0.36/kACU. Other families: ~21 ACU/vCPU at $0.57/kACU; 210 ACU/vCPU at $0.27/kACU; 160 ACU/vCPU at $0.30/kACU;

160 ACU/vCPU at $0.42/kACU; 195 ACU/vCPU at $0.22/kACU; 290 ACU/vCPU at $0.33/kACU; ACUs N/A for GPUs.
Largest sizes shown: D15_v2, DS15_v2, E64i_v3, E64is_v3, D72s_v2, G5, GS5, M128s. Range: entry level through 128 vCPUs (1 vCPU - 128 vCPUs), ordered by computational performance.

Azure Tiers
Standard HDD ✓ Cost effective ✓ Dev & test workloads
Standard SSD ✓ Cost effective ✓ Consistent performance ✓ Low-IO business-critical apps
Premium SSD ✓ Low latency ✓ Consistent performance ✓ IO-intensive business-critical apps
Ultra SSD ✓ Sub-ms latency ✓ Consistent performance ✓ Latency-sensitive top-tier apps
Managed Disks ✓ Simple ✓ Highly available & scalable ✓ Secure by default

Azure Disks – Performance
Performance tiers: Standard HDD and Standard SSD are best effort; Premium SSD and Ultra SSD are provisioned.

Project Direct Drive | Ultra SSD
Capacity up to 64 TB. Variable IOPS up to 160,000 per disk. Variable throughput up to 2,000 MB/s per disk. Low latency (1 ms).

Disk provisioning
[Diagram: disk provisioning spans SSD provisioning and VM/network provisioning. Example server SSD: 5k IOPS, 200 MB/s. Managed disk examples: 4k IOPS, 32 MB/s vs. 3,200 IOPS, 48 MB/s; 8k IOPS, 64 MB/s vs. 6,400 IOPS, 96 MB/s; 32k IOPS, 256 MB/s vs. 25,600 IOPS, 384 MB/s. Premium storage caching.]

SLA and high availability in Azure
Single VM: 99.9% VM SLA. Availability sets: 99.95% VM SLA. Availability zones: 99.99% VM SLA. Site Recovery & region pairs (Region 1 / Region 2); 54 regions.

"Poor Man's" High Availability - Scenario
• For cases where running two VMs is just too expensive:
• Prepare 2 VMs in an availability set
• Keep one of them turned off
• When you plan to update one of the VMs, turn on the stand-by VM; turn it off again when done
• Gaps: time to recover; potential for data loss; update coordination
• Value: easy, and better than nothing

Reserved VM Instances (RIs)
• Significantly reduce costs, with one-year or three-year terms on Windows and Linux virtual machines (VMs)
• Exchange or cancel your Azure RIs at any time
• Use instance size flexibility to apply RIs across VMs in a size group
• Integrated recommendations in the Azure portal

Azure Hybrid Benefit
What is Azure Hybrid Benefit? An Azure benefit that enables customers with Windows Server Software Assurance licenses to pay the less expensive non-Windows compute pricing when they upload and run their self-built Windows Server images on Azure.
What is the customer value proposition?
• Customer benefits from existing investments in Windows Server when moving to Azure
• Customer receives additional value from their Windows Server Software Assurance investment
• Azure Hybrid Benefit adds additional flexibility and value to Windows Server Standard and Datacenter

CUSTOMER'S ON-PREMISES LICENSE | LICENSE IMPACT FOR CUSTOMER | WINDOWS SERVER AZURE ENABLEMENT
No licensing concurrency: a Windows Server license cannot be assigned to other hardware while Azure Hybrid Benefit is being used.
Licensing concurrency: a Windows Server license can continue to be assigned both on-premises and in an Azure environment at the same time.
Customers with Windows Server Software Assurance are entitled to:
• Two instances of 1 to 8 vCPUs, or
• One instance of up to 16 vCPUs
• Stack licenses for VMs larger than 16 vCPUs
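The Availability Zones slide earlier notes that "three is enough for quorum." A minimal sketch of the majority arithmetic behind that claim (illustrative only, not Azure code):

```python
# Why three Availability Zones suffice for a majority quorum:
# with n replicas, a write must reach a majority, so the deployment
# tolerates the loss of n - majority zones.

def quorum(n: int) -> int:
    """Smallest majority of n replicas."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Zone failures survivable while a majority can still agree."""
    return n - quorum(n)

for zones in (1, 2, 3, 4, 5):
    print(zones, quorum(zones), tolerated_failures(zones))
```

Three is the smallest zone count that survives one zone failure while still leaving a majority (2 of 3) to agree; two zones tolerate none.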
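The ACU figures on the chart can be turned into rough cross-family comparisons. A sketch, assuming "$ per kACU" is read as dollars per 1,000 ACU-hours (the slide does not state the units, so that reading is an assumption), with hypothetical 8-vCPU sizes:

```python
# Rough ACU arithmetic using the deck's baseline: Standard_A1 = 100 ACU.
# Assumption: "$/kACU" means dollars per 1,000 ACU-hours.

def total_acu(acu_per_vcpu: float, vcpus: int) -> float:
    """Aggregate ACU capacity of a VM size."""
    return acu_per_vcpu * vcpus

def hourly_cost(acu_per_vcpu: float, vcpus: int, usd_per_kacu: float) -> float:
    """Implied hourly cost from ACU capacity and the $/kACU rate."""
    return total_acu(acu_per_vcpu, vcpus) * usd_per_kacu / 1000.0

# Hypothetical 8-vCPU sizes from two points on the chart:
fast = hourly_cost(210, 8, 0.27)  # the 210 ACU/vCPU family
base = hourly_cost(100, 8, 0.36)  # the 100 ACU/vCPU benchline
print(f"fast family: ${fast:.3f}/h, benchline: ${base:.3f}/h")
```

The point of the chart: a family can deliver more ACU per vCPU yet cost less per unit of compute, which is what "$ per kACU" surfaces.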
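The disk provisioning diagram pairs per-disk limits with VM/network limits. A plausible reading, sketched below, is that effective performance is capped by the lower of the two; the componentwise-min rule is an assumption about how the caps compose, not a documented Azure formula:

```python
# Illustrative: the IOPS/throughput a workload sees is bounded by both
# the provisioned disk limit and the VM/network limit (assumed min rule).

def effective(disk_limit: tuple, vm_limit: tuple) -> tuple:
    """Componentwise minimum of (IOPS, MB/s) caps."""
    return (min(disk_limit[0], vm_limit[0]),
            min(disk_limit[1], vm_limit[1]))

# One pair from the slide: an 8k IOPS / 64 MB/s disk on a
# 6,400 IOPS / 96 MB/s VM allocation.
print(effective((8_000, 64), (6_400, 96)))  # -> (6400, 64)
```

Note how the binding cap differs per dimension: the VM limits IOPS while the disk limits throughput.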
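The SLA tiers above (99.9% single VM, 99.95% availability sets, 99.99% availability zones) translate into concrete downtime budgets; a quick conversion for a 30-day month:

```python
# Convert a VM SLA percentage into allowed downtime per 30-day month.

def downtime_minutes_per_month(sla_percent: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% -> {downtime_minutes_per_month(sla):.2f} min/month")
```

Each extra "nine-ish" step shrinks the budget: roughly 43 minutes at 99.9%, about 22 at 99.95%, and under 5 at 99.99%.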
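The Hybrid Benefit entitlement above (two instances of up to 8 vCPUs, or one of up to 16, stacking for larger VMs) can be sketched as simple arithmetic. This deliberately simplifies the real licensing rules to the single-VM case, counting stacked sets in 16-vCPU increments:

```python
import math

# Simplified sketch of the Azure Hybrid Benefit entitlement: one
# Software Assurance license set covers two VMs of up to 8 vCPUs each,
# or one VM of up to 16 vCPUs; larger VMs stack additional sets.
# Not the authoritative licensing calculation.

def license_sets_for_vm(vcpus: int) -> int:
    """License sets consumed by one VM (16 vCPUs per set, minimum 1)."""
    return max(1, math.ceil(vcpus / 16))

print(license_sets_for_vm(8))   # -> 1 (the set could also cover a second <=8 vCPU VM)
print(license_sets_for_vm(16))  # -> 1
print(license_sets_for_vm(64))  # -> 4
```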