Continuously Available File Server: Under the Hood
Claus Joergensen, Principal Program Manager, Microsoft Corporation
WSV410
Agenda
Remote File Storage for Server Applications
Scale-Out File Server for application data
Setup and configuration
Cluster Shared Volumes
Scale-Out File Server cluster group
Scale-Out File Server scalability
SMB Transparent Failover

This session assumes familiarity with:
- Windows Server 2008 R2 Failover Clustering, including Cluster Shared Volumes
- Windows Server 2008 R2 File Server
Remote File Storage for Server Applications
New scenario in Windows Server 2012
Server apps storing data files on file shares. Examples:
- Hyper-V VHDs, configuration files, snapshots, etc.
- SQL Server database and log files
- IIS content and configuration files

Benefits:
- Easy provisioning and management: share management instead of LUNs and zoning
- Flexibility: dynamically relocate servers in the datacenter without needing to reconfigure network or storage access
- Leverage network investments: specialized storage networking infrastructure or knowledge is not required
- Lower CapEx and OpEx
Example:

[Diagram: Hyper-V, application, web, and database servers (running Hyper-V, SQL Server, and IIS) storing data on clustered file servers backed by shared storage]
Clustered File Server
Scale-Out File Server for Application Data
New clustered file server
Targeted for server app storage
Key capabilities*:
- Dynamic scaling with active-active file shares
- Fault tolerance with zero downtime
- Fast failure recovery
- Cluster Shared Volume cache
- CHKDSK with zero downtime
- Application-consistent snapshots
- Support for RDMA-enabled networks
- Simpler management

Requirements:
- Windows Failover Cluster with Cluster Shared Volumes
- Both the application server and the file server cluster must be running Windows Server 2012
[Diagram: application servers accessing a single logical file server (\\fs\share) through a single file system namespace on Cluster Shared Volumes, over the data center network (Ethernet, InfiniBand, or a combination)]

*) Capabilities highlighted in orange are unique to Scale-Out File Server
Setup and Configuration
Install the necessary role and feature on all nodes:
- File Server role
- Failover Clustering feature

Create the cluster (no special requirements)
Add cluster disks to Cluster Shared Volumes
Configure networks for:
- Client Access Points (CAP)
- Cluster Shared Volumes (CSV)

Create the File Server role:
- Select "Scale-Out File Server for application data"
- Give it a network name

Create file shares
Windows PowerShell Example
# Install roles and features
Import-Module ServerManager
Add-WindowsFeature -Name File-Services, Failover-Clustering, RSAT-Clustering

# Create failover cluster
New-Cluster -Name smbclu -Node FSF-260403-07, FSF-260403-08, FSF-260403-09

# Add Cluster Disk 1 to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Configure Cluster Network 1 for client access and Cluster Network 2 for CSV (may not be needed)
$(Get-ClusterNetwork -Name "Cluster Network 1").Role = 3
$(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1

# Create Scale-Out File Server
Add-ClusterScaleOutFileServerRole -Name smbsofs

# Create file share
New-SmbShare -Name vm1 -Path C:\ClusterStorage\Volume1\vm1 -FullAccess domain\hvhost$
Cluster Shared Volumes File System
Fundamental to, and required for, Scale-Out File Servers: scale-out file shares require CSVFS paths
Supports VSS for SMB file shares
CSVFS supports most NTFS features and operations (detailed information is available with the Windows Server 2012 Release Candidate)
Direct I/O support for file data access, with caching of CSVFS file data (controlled by oplocks)
Redirects I/O for metadata operations to the coordinator node
Redirects I/O for data operations when a file is being accessed simultaneously by multiple CSVFS instances
Leverages SMB Direct and SMB Multichannel for internode communication
Cluster Shared Volumes Caching
Improve CSV I/O Performance
Windows Cache Manager integration:
- Buffered read/write I/O is cached the same way as on NTFS

Cluster Shared Volumes Block Cache:
- Read-only cache for unbuffered I/O (I/O that is excluded from the Windows Cache Manager)
- Distributed cache, guaranteed to be consistent across the cluster
- Significant value for pooled VM VDI scenarios

Enabling the CSV Block Cache:
- SharedVolumeBlockCacheSizeInMB (cluster common property): 0 = disabled; non-zero = the amount of RAM in MB to be used for the cache on each cluster node. Recycling the resource is not needed.
- CsvEnableBlockCache (Physical Disk resource private property): 0 = disabled (default); 1 = enabled for that cluster shared volume. Requires recycling the resource to take effect.
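As a sketch of the two settings above (assuming a CSV named "Cluster Disk 1" and a 512 MB cache; the names and size are illustrative, not from the deck):

```powershell
# Cluster-wide block cache size in MB on each node
# (cluster common property; no resource recycle needed)
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Enable the block cache on a specific CSV
# (Physical Disk resource private property)
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1

# Recycle the resource so the private property takes effect
Get-ClusterSharedVolume "Cluster Disk 1" | Stop-ClusterResource
Get-ClusterSharedVolume "Cluster Disk 1" | Start-ClusterResource
```

Recycling the CSV briefly pauses I/O on that volume, so it is best done in a maintenance window.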
CHKDSK with Clustered Shared Volumes
CHKDSK is seamless with CSV
CHKDSK is significantly improved: scanning (online) is separated from repair (offline), and with CSV the repair is online as well
CHKDSK processing with CSV:
1. The cluster checks (once a minute) whether CHKDSK (spotfix) is required
2. The cluster enumerates NTFS $corrupt to identify affected files
3. The cluster pauses the affected CSV file system (CSVFS) to pend I/O
4. The underlying NTFS volume is dismounted
5. CHKDSK (spotfix) runs against the affected files for a maximum of 15 seconds to ensure applications are not affected
6. The underlying NTFS volume is mounted and the CSV namespace is unpaused
If CHKDSK (spotfix) did not process all records, the cluster waits 3 minutes before continuing; this enables a large set of affected files to be processed over time
If the corruption is too large, CHKDSK (spotfix) is not run and is marked to run at the next Physical Disk online
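The same scan/spotfix split is exposed directly through the Windows Server 2012 Repair-Volume cmdlet; a minimal sketch (the drive letter is illustrative):

```powershell
# Online scan: detect corruption and log it to NTFS $corrupt
# without taking the volume offline
Repair-Volume -DriveLetter D -Scan

# Spotfix: briefly take the volume offline and repair only
# the corruptions previously logged by the scan
Repair-Volume -DriveLetter D -SpotFix
```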
Anatomy of a Scale-Out File Server
Scale-Out File Server group

Contains:
- Distributed Network Name
- Scale-Out File Server

Group type: ScaleoutFileServer
Resource types: Scale Out File Server, Distributed Network Name

Get-ClusterGroup | ? {$_.GroupType -eq "ScaleoutFileServer"} | FL Name, OwnerNode, State, GroupType

Name      : smbsofs33
OwnerNode : FSF-260403-07
State     : Online
GroupType : ScaleoutFileServer

Get-ClusterGroup | ? {$_.GroupType -eq "ScaleoutFileServer"} | Get-ClusterResource

Name                   State    OwnerGroup   ResourceType
----                   -----    ----------   ------------
Scale-Out File Server  Online   smbsofs33    Scale Out File Server
smbsofs33              Online   smbsofs33    Distributed Network Name
Distributed Network Name (DNN)
Client Access Point (CAP) for a Scale-Out File Server
DNS name on the network
Security:
- Creates and manages the computer object in AD
- Registers credentials with LSA on each node
DNS:
- Registers the CAP with DNS
- Registers node IP addresses for all nodes; does not use virtual IP addresses
The DNN updates DNS when:
- The DNN resource comes online, and every 24 hours
- A node is added to or removed from the cluster
- A cluster network is added or removed as a client network
- An IP address changes
If not using dynamic DNS, you must manually add DNS records with the node IPs on the client-access cluster networks, for each node
> smbsofs33
Server:  stb-red-dc-01.stbtest.microsoft.com
Address: 10.200.81.201

Non-authoritative answer:
Name:    smbsofs33.ntdev.corp.microsoft.com
Addresses: 2001:4898:0:fff:0:5efe:10.217.108.49
           2001:4898:0:fff:0:5efe:10.217.108.103
           2001:4898:0:fff:0:5efe:10.217.108.148
           10.217.108.148
           10.217.108.49
           10.217.108.103

IPs are on the same subnet, one for each node.
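Without dynamic DNS, the per-node A records could be created manually with the DnsServer module; a hedged sketch (the zone, name, and addresses are illustrative, taken from the lookup output above):

```powershell
# One A record per cluster node, all under the DNN name,
# using each node's IP on the client-access network
Add-DnsServerResourceRecordA -ZoneName "ntdev.corp.microsoft.com" -Name "smbsofs33" -IPv4Address 10.217.108.49
Add-DnsServerResourceRecordA -ZoneName "ntdev.corp.microsoft.com" -Name "smbsofs33" -IPv4Address 10.217.108.103
Add-DnsServerResourceRecordA -ZoneName "ntdev.corp.microsoft.com" -Name "smbsofs33" -IPv4Address 10.217.108.148
```

Remember to update these records by hand whenever nodes or client networks change, since the DNN cannot do it for you in this configuration.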
Distributed Network Name (DNN)
DNS round-robins client DNS lookups; DNS sorts IPv6 and IPv4 separately and concatenates them with IPv6 at the top
The SMB client is resilient to unavailable IPs:
- Attempts to connect to the first IP address
- After 1 second, attempts the next 7 IP addresses
- If any of the previous attempts fail, attempts the next IP address
- Continues until it reaches the end of the list
- Proceeds with the first server to respond
The SMB client:
- Connects to one and only one cluster node for a given scale-out file server
- Can connect to different cluster nodes for different scale-out file servers
> smbsofs33
Server:  stb-red-dc-01.stbtest.microsoft.com
Address: 10.200.81.201

Non-authoritative answer:
Name:    smbsofs33.ntdev.corp.microsoft.com
Addresses: 2001:4898:0:fff:0:5efe:10.217.108.49
           2001:4898:0:fff:0:5efe:10.217.108.103
           2001:4898:0:fff:0:5efe:10.217.108.148
           10.217.108.148
           10.217.108.49
           10.217.108.103

> smbsofs33
Server:  stb-red-dc-01.stbtest.microsoft.com
Address: 10.200.81.201

Non-authoritative answer:
Name:    smbsofs33.ntdev.corp.microsoft.com
Addresses: 2001:4898:0:fff:0:5efe:10.217.108.103
           2001:4898:0:fff:0:5efe:10.217.108.148
           2001:4898:0:fff:0:5efe:10.217.108.49
           10.217.108.49
           10.217.108.148
           10.217.108.103
Scale-Out File Server (SOFS)
The Scale-Out File Server resource is responsible for:
- Bringing scale-out file shares online on each node
- Listening for scale-out share creations, deletions, and changes
- Replicating changes to the other nodes
- Ensuring consistency across all nodes for the Scale-Out File Server
Implemented using cluster clone resources:
- All nodes run a SOFS clone
- The clones are started and stopped by the SOFS leader
- The SOFS leader runs on the node where the Scale-Out File Server resource is online
Scale-Out File Server group behavior
The group is online on one of the nodes
Moving the group:
- Moves the responsibility for coordination
- Does not affect the availability of the name or shares
Admins can constrain which cluster nodes can be used:
- Modify the "possible owners" list for the DNN and SOFS resources
- Useful if some nodes must be reserved for other workloads
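Constraining the possible owners could be sketched as follows (the node names are illustrative, reusing those from the earlier setup example):

```powershell
# Restrict both resources in the Scale-Out File Server group
# to a subset of the cluster nodes
Get-ClusterGroup | ? {$_.GroupType -eq "ScaleoutFileServer"} |
    Get-ClusterResource |
    Set-ClusterOwnerNode -Owners FSF-260403-07, FSF-260403-08
```

Nodes omitted from the owners list remain available for other workloads without hosting this scale-out file server.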
Client Redirection
SMB clients are distributed at initial connect through DNS round robin
SMB clients are not redistributed automatically
SMB clients connected to a Scale-Out File Server can be redirected to use a different cluster node

[Diagram: SQL Server with an SMB connection to Node A of the Scale-Out File Server cluster and witness connections to the other nodes, being redirected to Node C]

Get-SmbWitnessClient | FL ClientName, FileServerNodeName, WitnessNodeName

ClientName         : SQLServer
FileServerNodeName : A
WitnessNodeName    : B

Move-SmbWitnessClient -ClientName SQLServer -DestinationNode C
Client Redirection Flow
Normal operation:
1. SQL Server has an SMB connection to Node A
2. SQL Server has a witness connection to Node B
3. The administrator issues: Move-SmbWitnessClient -ClientName SQLServer -DestinationNode C
4. Node B sends a notification to SQL Server to move to Node C
5. SQL Server disconnects from Node A and connects to Node C
6. The SMB client on SQL Server resumes its handles

[Diagram: the redirection flow across Nodes A, B, and C of the Scale-Out File Server cluster, showing witness and SMB communication]
Cluster Network Planning
SMB client to SMB server:
- Use cluster networks enabled for client access
- If using multiple network adapters, each must be on a separate IP subnet

CSV traffic:
- Metadata updates (infrequent for Hyper-V and SQL Server workloads)
- Mirrored Storage Spaces
- Redirected I/O when a node has no storage connectivity
- Prefers cluster networks not enabled for client access
- Leverages SMB Multichannel and SMB Direct (SMB over RDMA)

Disable iSCSI networks for cluster use, to prevent unpredictable latencies

[Diagram: cluster networks carrying storage I/O (FC, iSCSI, SAS), CSV metadata, mirrored Spaces traffic, redirected I/O on storage link failures, and SMB client-to-server traffic]
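These roles map onto the cluster network Role property (0 = not used by the cluster, 1 = cluster traffic only, 3 = cluster and client traffic); a sketch, with illustrative network names:

```powershell
# Client access (cluster + client traffic)
$(Get-ClusterNetwork -Name "Cluster Network 1").Role = 3

# CSV / internal cluster traffic only
$(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1

# Exclude the iSCSI network from cluster use entirely
$(Get-ClusterNetwork -Name "iSCSI Network").Role = 0
```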
Scale-Out File Server Scalability and Performance
Test Bed Topology
SMB clients: 8 computers, each with 2x10Gbps
Scale-Out File Server cluster: 8 nodes, each with 2x10Gbps
SAN storage: 2x8Gbps FC fabric to the file servers; 4x4Gbps FC fabric to the storage; RAID 5 LUNs
Bandwidth Scalability
IOMeter parameters: 512KiB I/O size, 100% sequential read, 1 thread, 144 outstanding I/Os

                            Local    Remote
Overall throughput (MiBps)  6,100    6,000
Delta from local                     ~2%

Bottlenecked on 2x4Gbps FC
Preliminary results based on Windows Server 2012 Beta
Hyper-V boot-storm
Local vs. Remote:
- Uses parent/diff VHDX, 8GB CSV block cache
- Measured from VM state change to user logon complete
- 320 virtual machines per host, 2,560 virtual machines (8 hosts)

CSV Cache Enabled vs. Disabled:
- Uses parent/diff VHDX, 8GB CSV block cache
- Measured from VM state change to user logon complete
- 320 virtual machines per host, 5,120 virtual machines (16 hosts)

Individual VM boot time (in seconds):

          Local    Remote
Minimum   18       19
Maximum   34       36
Average   23       25

          Enabled  Disabled
Minimum   19       18
Maximum   61       1,141
Average   29       211

With the CSV cache enabled, 90% booted in <40s
Preliminary results based on Windows Server 2012 RC
SMB Transparent Failover
Historical - Windows Server 2008 R2
Failovers are not transparent
Targeted at traditional file server use scenarios, but server applications expect storage to be continuously available
In Windows Server 2008 R2, connections and file handles are lost on share failover, leading to:
- Application disruption
- Administrator intervention required to recover

[Diagram: SQL Server connected to \\fs1\share on a two-node file server cluster. (1) Normal operation; (2) the share fails over and connections and handles are lost; (3) administrator intervention is needed to recover]
Windows Server 2012
SMB Transparent Failover:
- Failover is transparent to the server application: zero downtime, with only a small I/O delay during failover
- Supports planned failovers (hardware/software maintenance, load balancing / client redirection) and unplanned failovers (hardware/software failures)
- Resilient for both file and directory operations
- Interoperable with both types of clustered file servers: Scale-Out File Server and "classic" File Server
Requires:
- Windows Server 2012 Failover Cluster
- SMB client with SMB 3.0
- File shares configured with the Continuous Availability property (the default)

[Diagram: SQL Server connected to \\fs1\share on a two-node file server cluster. (1) Normal operation; (2) a failure occurs, connections and handles are lost, and I/O stalls briefly; (3) connections and handles are auto-recovered and application I/O continues with no errors]
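Continuous availability is a per-share property; a sketch of creating and inspecting it (the share name and path are illustrative, following the earlier setup example):

```powershell
# Create a share with the Continuous Availability property
# (on by default for shares created on a Windows Server 2012 cluster)
New-SmbShare -Name vm2 -Path C:\ClusterStorage\Volume1\vm2 -FullAccess domain\hvhost$ -ContinuouslyAvailable $true

# Verify which shares are continuously available
Get-SmbShare | FL Name, ContinuouslyAvailable
</imports>
```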
SMB Transparent Failover
New components (1/2):
SMB client (redirector):
- Client operation replay
- End-to-end support for replayable and non-replayable operations
SMB server:
- Support for network state persistence
- Files are always opened write-through

[Diagram: SMB 3.0 client and server stacks, showing the SMB redirector with operation replay, the Witness client and Witness service communicating via the Witness protocol, and the SMB server with state persistence backed by the Resume Key Filter on the file system]
SMB Transparent Failover
New components (2/2):
Resume Key Filter:
- Resumes handle state after planned or unplanned failover
- Fences handle state information
Witness protocol:
- Enables faster unplanned failover because clients do not wait for timeouts
- Enables dynamic reallocation of load with Scale-Out File Servers

[Diagram: the same SMB 3.0 client/server stack, highlighting the Resume Key Filter and the Witness protocol]
Resume Key Filter
Overview
Resumes handle state after planned or unplanned failover
Persists state information only for handles with a continuous availability context
Installs with the Failover Clustering feature
Sits in the file server's file system stack and attaches to all cluster disks
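Since the Resume Key Filter is a file system minifilter, its attachment can be observed with the fltmc utility from an elevated prompt on a cluster node (the volume path is illustrative):

```powershell
# List loaded minifilter drivers (look for the resume key filter)
fltmc filters

# Show which filters are attached to a given CSV volume
fltmc instances -v C:\ClusterStorage\Volume1
```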
Resume Key Filter
Features (1/3)
Protection of handle state so the client can reconnect:
- Needed, for example, when a failure occurs while the client holds an exclusive no-share handle
- Blocks new handle creation until the previously known handles are resumed or cancelled (timed out)
Protection from namespace inconsistency:
- Needed when a failure occurs while a file rename is in flight
Resume Key Filter
Features (2/3)
Enables Create replay:
- Needed when failover occurs while a FILE_CREATE is in flight
- RKF records the pre-existence state for the file BEFORE the create is passed down to NTFS
- After failover, the client re-issues the create as a replay
- On receipt of the replay, RKF determines the correct processing for FILE_CREATE so that the client sees the correct result; if the file now exists, FILE_CREATE => FILE_OPEN and the return result is FILE_CREATED
Resume Key Filter
Features (3/3)
Restoration of Delete Pending state:
- Needed when a file has multiple handles open and has been marked for deletion when failover occurs
- RKF holds the Delete Pending state above NTFS so that existing handles can be resumed after failover
Handling for changes to the read-only attribute:
- Needed when the read-only attribute is changed while there are pre-existing writers
- RKF undoes the read-only attribute to allow restoration of the previously granted access
Opaque storage for remote file system specific data:
- For example, SRV stores the information needed to resume byte-range locks
Resume Key Filter
Volume instance attach

[Diagram only on this slide: the filter's per-volume instance attachment]
SMB Witness
Overview
Enables faster recovery from unplanned failures: SMB clients do not need to wait for TCP timeouts
Enables dynamic reallocation of load with Scale-Out File Servers: the administrator can redirect an SMB client to a different cluster node
Installs with the Failover Clustering feature
Is a service, and runs on all cluster nodes
Not to be confused with the Failover Cluster File Share Witness
SMB Witness
Registration process
1. The SMB client connects to \\fs1\share on Node A and notifies the Witness client
2. The Witness client obtains the list of cluster members from the Witness service on Node A
3. The Witness client removes the data node (Node A) and selects a witness server (Node B)
4. The Witness client registers with Node B for notification of events for \\fs1
5. The Witness server on Node B registers with the cluster infrastructure for event notification on \\fs1

[Diagram: SQL Server with an SMB connection to \\fs1\share on Node A and a witness connection to Node B of the file server cluster]
SMB Witness
Notification process

Normal operation: SMB connection with Node A, witness connection with Node B
1. An unplanned failure occurs on Node A
2. The cluster infrastructure notifies the Witness server on Node B
3. The Witness server on Node B notifies the Witness client that Node A went offline
4. The Witness client notifies the SMB client
5. The SMB client drops its connection to Node A and starts reconnecting to another cluster node (Node B)
6. The Witness client attempts to select a new Witness server

[Diagram: the notification flow between SQL Server and Nodes A and B of the file server cluster, showing witness and SMB communication]
Enhanced and New Event Logs
Applications and Services Logs – Microsoft – Windows – SMBClient
Applications and Services Logs – Microsoft – Windows – SMBServer
Applications and Services Logs – Microsoft – Windows – ResumeKeyFilter
Applications and Services Logs – Microsoft – Windows – SMBWitnessClient
Applications and Services Logs – Microsoft – Windows – SMBWitnessService

Example: SMB Transparent Failover
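These channels can also be queried from PowerShell with Get-WinEvent; a sketch (the exact channel names vary slightly by component, so listing them first is the safest starting point):

```powershell
# Discover the SMB- and filter-related event channels on this server
Get-WinEvent -ListLog *Smb*, *ResumeKeyFilter* | FL LogName, RecordCount

# Read the most recent events from the SMB client operational log
Get-WinEvent -LogName Microsoft-Windows-SmbClient/Operational -MaxEvents 20
```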
demo
Claus Joergensen, Principal Program Manager, Windows File Server Team
Scale-Out File Server
The TechEd Cluster in a Box Demo Stack
Cluster-in-a-Box prototypes:
- Quanta / Wistron
- LSI HA-DAS MegaRAID® and SAS controllers
- Quanta application servers, JBOD expansion, and 10GbE switch
- Mellanox IB FDR NICs and switch
- OCZ SAS SSDs

Infrastructure:
- Domain controller server
- Power distribution unit
- 1GbE switch
- Keyboard & monitor

MegaRAID® is a registered trademark of LSI Corporation
Related Content
Breakout Sessions:
- VIR306 Hyper-V over SMB2: Remote File Storage Support in Windows Server 2012 Hyper-V
- WSV303 Windows Server 2012 High-Performance, Highly-Available Storage Using SMB
- WSV310 Windows Server 2012: Cluster-in-a-Box, RDMA, and More
- WSV314 Windows Server 2012 NIC Teaming and Multichannel Solutions
- WSV322 Update Management in Windows Server 2012: Revealing Cluster-Aware Updating
- WSV330 How to Increase SQL Availability and Performance Using Windows Server 2012 SMB 3.0 Solutions
- WSV334 Windows Server 2012 File and Storage Services Management
SIA, WSV, and VIR Track Resources
Talk to our experts at the TLC: #TE(sessioncode)

DOWNLOAD Windows Server 2012 Release Candidate: microsoft.com/windowsserver
Hands-On Labs
DOWNLOAD Windows Azure: windowsazure.com/teched
Resources
Connect. Share. Discuss.: http://northamerica.msteched.com
Learning: Microsoft Certification & Training Resources: www.microsoft.com/learning
TechNet: Resources for IT Professionals: http://microsoft.com/technet
Resources for Developers: http://microsoft.com/msdn
Please Complete an Evaluation

Your feedback is important! There are multiple ways to evaluate sessions: complete an evaluation on CommNet, or scan the tag to evaluate this session now on myTechEd Mobile, and be eligible to win great daily prizes and the grand prize of a $5,000 travel voucher!
© 2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Appendix
SMB Transparent Failover Semantics (1/2)
Server side: state persistence until the client reconnects
- The server obeys a contract with the client to ensure replay of operations is transparent to the application
- All race conditions are cleanly addressed
- The protocol documentation will fully define the behavior

Server state preservation:

Preserved (for the persistent handle timeout interval), for transient/permanent network disconnects and server failovers:
- In-progress CREATEs: replay; duplicates resolved via GUID
- Opened file handles: fenced until the client replays the open with the same GUID; includes support for desired access and share modes
- Read/write I/Os: must ensure all writes prior to failover are flushed before processing replay of reads or writes
- In-progress byte-range locks: replay; duplicates resolved via sequence numbers
- Established byte-range locks: server preserves; client does not replay
- Sticky timestamps: Office interop
- SMB2 FIDs describing open handles: only the persistent portion of the SMB2 FID is needed

Not preserved (client replays, etc.):
- Enumeration of directories & EAs: client restarts the enumeration (Win32 API compliant)
- Close: client replays
- Change notification queue/block: client handles this
- Oplock state: not continuously available; only used by down-level SMB clients, which do not use CA

Mixed:
- In-progress lease breaks: replay if reconnecting to the same node; more complex if a new node
- File and directory lease state: renegotiated on reconnect, as part of open re-establishment; Write+Handle leases are preserved

[Diagram: the SMB 3.0 client/server stack with operation replay on the SMB2 redirector and state persistence via the Resume Key Filter on the SMB2 server]
SMB Transparent Failover Semantics (2/2)
Client side: state recovery
- The client obeys a contract with the server to ensure replay of operations is transparent to the application
- All race conditions are cleanly addressed
- The protocol documentation will fully define the behavior

State preservation actions:

Simple replay of the operation (requires server state to ensure correct operation):
- CREATE (file or directory): using the prior Create GUID, issue a "re-open"
- Read or write I/Os: replay (after the Create is reconnected)
- Rename / set DELETE_DISPOSITION: SMB2 FID or GUID used for open data handles, lease handles, and handles opened for delete/rename
- In-progress byte-range lock requests: replay; duplicates resolved via sequence numbers
- FSCTLs: replay after re-open
- Close: replay (re-open, then close); a failed re-open is okay

Attempt to replay, potentially renegotiate:
- Directory lease state: renegotiation can cause a directory cache flush
- File lease state: Write+Handle leases preserved; all else may be renegotiated
- Cached file data & metadata: the write-back data cache is preserved; may cause a flush of metadata and/or read caches

Other action:
- Granted byte-range locks: no replay; the server preserves them
- Enumeration state (directories and EAs): start the enumeration over, skipping entries already returned
- Change notification queue/block: complete to the app with an error code to force re-enumeration/requeue

[Diagram: the SMB 3.0 client/server stack with operation replay on the SMB2 redirector and state persistence via the Resume Key Filter on the SMB2 server]
Cluster File Server – feature interoperability

[Table: a matrix of the features below against the two clustered file server types, "Classic" and Scale-Out; the per-cell support marks are not recoverable from this extract]

Data management: BranchCache; data de-duplication; DFS Namespaces (root); DFS Namespaces (leaf); DFS Replication; FSRM (quota, screening, reporting); FSRM classification; File Server VSS Agent; Folder Redirection; Client Side Caching
Apps: Information Worker (not recommended); Hyper-V; SQL Server
SMB capabilities: SMB Transparent Failover; SMB Scale-Out; SMB Multichannel; SMB Direct; SMB Encryption
File system: NTFS; ReFS; CSVFS