Introduction to SMB3

Server Message Block (often, and incorrectly, referred to as CIFS) has been the mainstay of Windows file servers since the days of NetBIOS. Version 2, and the improvements it brought with it, was released in 2006 with Windows Vista and Server 2008. While SMB2 brought some much-needed improvements, including limiting the chattiness of SMB over the wire (essential in WAN environments), it still lacked the capabilities (and thus enterprise approval) of NFS. In the time between SMB2 and SMB3 we have seen users migrate away from Windows file servers to array-based file storage, and away from block storage to NFS for virtualization, but all that is about to change.

Server Message Block 3

With the advent of Windows Server 2012, Microsoft has released SMB3. Far more than a mere upgrade to SMB2, SMB3 features important new enterprise capabilities, rich client features, and performance through the roof, a far cry from its predecessors.

SMB Direct (RDMA)

Utilizing RDMA network devices, SMB Direct allows SMB to bypass the NIC and transport layer drivers and communicate directly with the RDMA NIC. This bypass increases performance and lowers latency significantly, to near wire speeds, and with InfiniBand connectivity those wire speeds can comfortably reach 50Gbps on a single port. SMB Direct can be coupled with SMB Multichannel to provide a reliable and highly available network topology for low-latency file server access, enabling the file-level application support we are increasingly seeing. As file servers grow larger and larger, with file counts often nearing the billions, this improvement helps overcome one of the major bottlenecks of file server performance.

SMB Multichannel

While SMB Direct allows for low-latency, high-throughput RDMA links, without SMB Multichannel it would still lack a certain enterprise comfort level: SMB failures have always been disruptive to users at the least, and catastrophic at worst. SMB Multichannel allows for seamless use of all network interfaces (and can be combined with network teaming) with near-linear performance improvement (demos at the Build conference had four 10GbE ports pulling 4.5GB/s of throughput). Even a single NIC that supports Receive Side Scaling (RSS) can benefit from the new multichannel capabilities by establishing multiple TCP connections, allowing load to be balanced across cores and CPUs rather than the single-core affinity imposed by a single TCP connection. When pushing a lot of small I/O over a large interface such as 10GbE this becomes essential. Clients that support SMB3 will automatically utilize multiple channels when RSS is configured, and multiple NICs when they are available.
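To make the multichannel idea concrete, here is a minimal sketch (plain Python sockets, not the SMB3 protocol itself) of the underlying trick: stripe one transfer across several TCP connections so the work can be spread across threads and cores instead of serializing on a single connection. All names and the channel count are illustrative assumptions.

```python
# Illustrative sketch only: splitting one transfer across several TCP
# connections, analogous to how SMB Multichannel spreads I/O across RSS
# queues and NICs so no single core becomes the bottleneck.
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

CHANNELS = 4                # SMB Multichannel negotiates several connections
PAYLOAD = b"x" * (1 << 20)  # 1 MiB of data to move

received = [b""] * CHANNELS

def server(listener):
    # Accept one connection per channel and drain each, as a file server would.
    def drain(conn, idx):
        chunks = []
        while True:
            data = conn.recv(65536)
            if not data:
                break
            chunks.append(data)
        received[idx] = b"".join(chunks)
        conn.close()

    threads = []
    for i in range(CHANNELS):
        conn, _ = listener.accept()
        t = threading.Thread(target=drain, args=(conn, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(CHANNELS)
port = listener.getsockname()[1]
srv = threading.Thread(target=server, args=(listener,))
srv.start()

# Client side: carve the payload into per-channel slices, push in parallel.
def send_slice(idx):
    part = PAYLOAD[idx::CHANNELS]  # interleaved striping across channels
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(part)
    return len(part)

with ThreadPoolExecutor(max_workers=CHANNELS) as pool:
    sent = sum(pool.map(send_slice, range(CHANNELS)))

srv.join()
listener.close()
print("moved", sent, "bytes over", CHANNELS, "connections")
```

The real protocol adds channel binding, replay detection, and automatic failover between interfaces, but the core win is the same: several independent connections, each serviced by its own core.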

SMB Application Shares

With the combination of the two new SMB features listed above, and the myriad of improvements to networking and storage in Windows Server 2012, we finally have the capability to provide certain enterprise applications with file-level storage. This move vastly simplifies already complex enterprise application deployments by abstracting much of the low-level storage architecture away from the application architecture, while giving us yet another option for the storage of large, complex, and performance-demanding systems. Currently SQL Server 2012 databases and Hyper-V 2012 VHDX files are supported in an SMB3 environment, and application shares provide the performance and availability inherent in SMB Direct and SMB Multichannel at the SMB cluster level, giving us a single namespace spanning multiple servers.


All of these features and capabilities are helping bring the file server back to a Windows server, and although major vendors such as EMC and NetApp will be supporting SMB3, it is unknown if they will support the full gamut of features and capabilities, or the timeframe to reach this level of compatibility.

As file systems get larger and larger and our hunger for data ever increases, it becomes that much more critical that our file server infrastructure can scale and perform to meet our demands. Windows Server 2012 and SMB3 help us get there, today.

Project Lightning (aka VFCache)

EMC today officially launched VFCache, the project previously known as Lightning.

VFCache is a host-side PCIe SSD product, not totally dissimilar in its mechanical operation to products from Fusion-io, but its integration with the rest of the EMC suite of products adds significant value to this version 1.0 offering. Software is key here!

At its heart, VFCache allows IO to occur over the PCIe bus at lightning (no pun intended) speeds, approaching 4000x the IOps per GB of traditional magnetic media, and about 20x the IOps per GB of SSDs. That is an amazing catch-up step for a metric (drive IOps per GB) that has remained rather stagnant for the last 20 years.
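A quick back-of-envelope calculation shows why IOps per GB is the interesting metric here. The device figures below are rough, illustrative assumptions of my own (typical numbers for a 15K RPM disk, a SATA/SAS SSD of the era, and a PCIe flash card), not EMC's published specs, so the exact multipliers will shift depending on which devices you compare:

```python
# Back-of-envelope IOps-per-GB comparison; all device figures are rough
# illustrative assumptions, not vendor-published numbers.
hdd_iops, hdd_gb = 180, 600        # 15K RPM SAS drive: ~180 random IOps
ssd_iops, ssd_gb = 40_000, 200     # SATA/SAS SSD of the era
pcie_iops, pcie_gb = 300_000, 300  # PCIe flash card, e.g. a 300GB unit

hdd_density = hdd_iops / hdd_gb      # 0.3 IOps per GB
ssd_density = ssd_iops / ssd_gb      # 200 IOps per GB
pcie_density = pcie_iops / pcie_gb   # 1000 IOps per GB

print(f"PCIe flash vs disk: {pcie_density / hdd_density:.0f}x IOps per GB")
print(f"PCIe flash vs SSD:  {pcie_density / ssd_density:.0f}x IOps per GB")
```

Notice that capacity growth is what kills disk on this metric: a bigger spindle adds GB but almost no IOps, so density per GB keeps falling while flash holds steady.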

A few important facts about the release, summarized from Chad's blog (Virtual Geek):

  • Software is key and the hardware is inconsequential, but the initial partner vendor is Micron, providing a 300GB unit
  • Support for a variety of Dell, Cisco, HP and IBM systems, but no blades yet
  • Utilization in a VMware environment ties the VM to a local system, removing vMotion benefits
  • Primary use case for v1.0 is extremely high performance requirements, high read cache

Things to look out for that are already on the roadmap:

  • De-Duped Cache (stealing tech from Avamar, Data Domain and Recoverpoint?)
  • Better integration with Arrays (VNX, VMAX)
  • Distributed Cache (read: VMware clusters operate properly with it?)
  • Bigger models
  • Mezzanine models for blades
  • MLC usage

And this leads ultimately to the evolution of the product line into Project Thunder, another initiative on the cards from EMC that extends VFCache to the network: small 2U or 4U offerings, terabytes of flash, millions of IOps, and strong integration with local VFCache systems.

Most of the Project Thunder details are still under wraps, but it should be a very compelling offering, and an essential piece of larger VDI and heavy-IO virtualization strategies. A tech preview is coming in Q2 2012, probably at EMC World.