Introduction to SMB3

Server Message Block (often, and incorrectly, referred to as CIFS) has been the mainstay of Windows file servers since the days of NetBIOS. Version 2, and the improvements it brought with it, arrived in 2006 with Windows Vista and later Windows Server 2008. While SMB2 brought some much needed improvements, including limiting the chattiness of SMB over the wire (essential in WAN environments), it still lacked the capabilities (and thus the enterprise approval) of NFS. In the time between SMB2 and SMB3 we have seen users migrate away from Windows file servers to array based file storage, and away from block storage to NFS for virtualization, but all that is about to change.

Server Message Block 3

With the advent of Windows Server 2012, Microsoft has released SMB3. Far more than a mere upgrade to SMB2, SMB3 brings important new enterprise features, rich client capabilities, and performance through the roof, a far cry from its predecessors.

SMB Direct (RDMA)

Utilizing RDMA network devices and SMB Direct, SMB can bypass the NIC and transport layer drivers and communicate directly with the RDMA NIC. This bypass increases performance and lowers latency significantly, to near wire speed, and with InfiniBand connectivity those wire speeds can comfortably reach 50Gbps on a single port. SMB Direct can be coupled with SMB Multichannel to provide a reliable and highly available network topology for low latency file server access, enabling the file level application support we are increasingly seeing demand for. As file servers grow larger and larger, with file counts often nearing the billions, this improvement helps overcome one of the major bottlenecks of file server performance.
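To put those numbers in perspective, here is a rough back-of-the-envelope calculation of what a 50Gbps RDMA link means in practice. The port speed and the 10TB share size below are illustrative assumptions, not measured results.

```python
# Back-of-the-envelope figures for an SMB Direct link; the 50 Gbps port speed and
# the 10 TB share size are illustrative assumptions, not benchmark results.
LINK_GBPS = 50                          # assumed single InfiniBand port
BYTES_PER_SEC = LINK_GBPS * 1e9 / 8     # ~6.25 GB/s theoretical wire speed

SHARE_TB = 10                           # hypothetical file share size
share_bytes = SHARE_TB * 1e12

print(f"Theoretical wire speed: {BYTES_PER_SEC / 1e9:.2f} GB/s")
print(f"Streaming a {SHARE_TB} TB share at wire speed: "
      f"{share_bytes / BYTES_PER_SEC / 60:.0f} minutes")
```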

SMB Multichannel

While SMB Direct allows for low latency, high throughput RDMA links, without SMB Multichannel it would still lack a certain enterprise comfort level: SMB failures have always been disruptive to users at the least, and catastrophic at worst. SMB Multichannel allows for seamless use of all network interfaces (and can be combined with network teaming) with near linear performance improvement (demos at the Build conference had four 10GbE ports pulling 4.5GB/s of throughput). Even a single NIC that supports Receive Side Scaling (RSS) can benefit from the new multichannel capabilities by establishing multiple TCP connections, allowing the load to be balanced across cores and CPUs rather than pinned by the single core affinity of a single TCP connection. When pushing a lot of small I/O over a large interface such as 10GbE this becomes essential. Clients that support SMB3 will automatically use multiple channels when RSS is configured, and multiple NICs when they are available.
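The striping idea is easier to picture with a small sketch. The Python snippet below is purely conceptual: it is not the SMB3 protocol, and the server address and toy "READ offset length" request format are made up. It simply shows reads for one logical file being split across several TCP connections, so that each connection (and the RSS queue or CPU core it maps to) carries part of the load.

```python
# Conceptual sketch only -- this is NOT the SMB3 wire protocol. It illustrates the
# multichannel idea: reads for a single logical file are striped across several TCP
# connections. The server address and the "READ offset length" request format are
# hypothetical placeholders.
import socket
from concurrent.futures import ThreadPoolExecutor

SERVER = ("fileserver.example.com", 9000)  # hypothetical endpoint
CHANNELS = 4                               # e.g. one connection per RSS queue
CHUNK = 1 << 20                            # 1 MiB per request
FILE_SIZE = 64 << 20                       # 64 MiB file to fetch

def fetch_stripe(channel: int) -> bytes:
    """One channel fetches every CHANNELS-th chunk over its own connection."""
    data = bytearray()
    with socket.create_connection(SERVER) as conn:
        for offset in range(channel * CHUNK, FILE_SIZE, CHANNELS * CHUNK):
            conn.sendall(f"READ {offset} {CHUNK}\n".encode())
            data += conn.recv(CHUNK)       # sketch: assumes a full chunk per recv
    return bytes(data)

# Each worker drives its own connection, spreading the I/O across channels.
with ThreadPoolExecutor(max_workers=CHANNELS) as pool:
    stripes = list(pool.map(fetch_stripe, range(CHANNELS)))
```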

SMB Application Shares

With the combination of the two new SMB features described above, and the myriad of improvements to networking and storage in Windows Server 2012, we finally have the capability to provide certain enterprise applications with file level storage. This move vastly simplifies already complex enterprise application deployments by abstracting much of the low level storage architecture away from the application architecture, while giving us yet another option for storing large, complex, and performance demanding systems. Currently SQL Server 2012 databases and Hyper-V 2012 VHDX files are supported on SMB3 shares, and application shares provide the performance and availability inherent in SMB Direct and SMB Multichannel at the SMB cluster level, giving us a single namespace spanning multiple servers.

Conclusion

All of these features and capabilities are helping bring the file server back to a Windows server, and although major vendors such as EMC and NetApp will be supporting SMB3, it is unknown whether they will support the full gamut of features and capabilities, or how long it will take them to reach that level of compatibility.

As file systems get larger and larger and our hunger for data ever increases, it becomes that much more critical that our file server infrastructure can scale and perform to meet our demands. Windows Server 2012 and SMB3 help us get there, today.

Light Peak is dead... Long live Thunderbolt

Intel has finally realized a commercial package for its Light Peak initiative, in the form of Thunderbolt. Apple were the first to bring this to bear in the new MacBook Pro lineup announced last week; however, Intel have been quick to claim that this will not be an Apple exclusive technology and will be available to other partners and OEMs.

Despite the name, and the initial plans, Thunderbolt is currently based on an electrical medium, not an optical one, a departure from the original concept of an optical interconnect for high demand peripherals and buses. Intel have committed to continuing work on an optical option in the future, stating that results from testing on the electrical side were far better than expected, and that the electrical approach keeps both cost and complexity down for this initial offering.

Change of plans?
Light Peak was destined to be a transport medium, not a protocol itself; it wasn't set to replace USB or FireWire, but rather the physical media used to connect these devices. The initial consensus was that USB might well be the protocol of choice, but Intel have opted for a combination of DisplayPort and PCI Express thus far.

This diagram from Intel shows a simplified version of how the technology works:

[Diagram: Thunderbolt controllers combining DisplayPort and PCI Express signals onto a single cable]

As you can see, the Thunderbolt controllers at both ends (say, a monitor and a MacBook Pro) combine the signals from the two sources to cross a single cable. This allows the single Mini DisplayPort connector on a MacBook Pro to provide the video signal to the monitor, as well as other peripheral connectivity. Like USB, the ability to daisy chain these connections is built in, allowing, for example, a monitor to expose Thunderbolt ports for other devices that connect back to the MacBook Pro.

Utilizing PCIe in this manner provides some interesting possibilities. By extending the bus to remote devices, there is potential to connect numerous other controllers directly to the PCIe bus of the remote device and have them connect seamlessly to the host system via the single Thunderbolt cable. For instance, rather than just finding USB ports on a monitor, a manufacturer could build entire controllers for USB, FireWire, and eSATA into the monitor and have those controllers connect to the PCIe bus of the host system via Thunderbolt. This opens up some interesting possibilities in deployment options for vendors, as well as streamlining the way we connect peripherals to the host system (I for one have very few spare ports on the back of my systems at present, and a way to streamline more effectively than multiple USB hubs is always appreciated!).

The downside to this is obviously the extension of the PCIe bus outside of the host system, which has already caused some parties to raise security concerns, although this is no different from existing bus extension technologies that operate at such low hardware layers, such as ExpressCard and FireWire.

Lots of bits, not a lot of cable
The most staggering achievement of the new technology is the bandwidth it brings to consumer devices: each Thunderbolt port provides two full duplex, bi-directional 10Gbps channels, totaling 40Gbps. It only adds DisplayPort 1.1a support on top of this, rather than the newer 1.2 standard, but even so, this amounts to a combined total of almost 60Gbps of bandwidth from that single port!
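For those keeping score, the arithmetic behind those figures looks roughly like this. It is simple math on the numbers quoted above (two full duplex 10Gbps channels), not a citation of the Thunderbolt specification, and the Blu-ray comparison is just for scale.

```python
# Simple arithmetic on the figures quoted above -- not a specification citation.
per_channel_gbps = 10
channels = 2

per_direction_gbps = per_channel_gbps * channels   # 20 Gbps each way
aggregate_gbps = per_direction_gbps * 2            # 40 Gbps counting both directions

bytes_per_sec = per_direction_gbps * 1e9 / 8       # ~2.5 GB/s in one direction
disc_gb = 25                                       # single-layer Blu-ray image, for scale
print(f"Aggregate data bandwidth: {aggregate_gbps} Gbps")
print(f"A {disc_gb} GB disc image moves one way in "
      f"{disc_gb * 1e9 / bytes_per_sec:.0f} seconds")
```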

The potential for this technology is quite astounding, and with bandwidth like that there are myriad new ways of approaching connectivity that could be imagined. However, the standard at present is an Intel-only offering, requiring the purchase of controllers from Intel, and this itself could hinder the protocol's adoption by third parties, especially ones loyal to competitors such as AMD, which would ultimately undermine the growth of the standard.

Look out for the compatible devices from Promise and LaCie that have already been announced, as well as offerings from other vendors in the near future.