Server Message Block (often, and incorrectly, referred to as CIFS) has been the mainstay of Windows file servers since the days of NetBIOS, but version 2, and the improvements it brought with it, was released in 2006 with Windows Vista and later Server 2008. While SMB2 brought some much needed improvements, including limiting the chattiness of SMB over the wire (essential in WAN environments), it still lacked the capabilities (and thus enterprise approval) of NFS. In the time between SMB2 and SMB3 we have seen users migrate away from Windows file servers to array based file storage, and away from block storage to NFS for virtualization, but all that is about to change.
Server Message Block 3
With the advent of Windows Server 2012, Microsoft has released SMB3. Far more than a mere upgrade to SMB2, SMB3 features important new enterprise capabilities, rich client features, and performance through the roof, a far cry from its predecessors.
SMB Direct (RDMA)
Utilizing RDMA network devices, SMB Direct lets SMB bypass the NIC and transport layer drivers and communicate directly with the RDMA NIC. This bypass increases performance and lowers latency significantly, to near wire speeds, and with InfiniBand connectivity those wire speeds can comfortably reach 50Gbps on a single port. SMB Direct can be coupled with SMB Multichannel to provide a reliable and highly available network topology for low latency file server access, enabling the file level application support we are increasingly seeing demanded. As file servers become larger and larger, with file counts often nearing the billions, this improvement helps overcome one of the major bottlenecks of file server performance.
While SMB Direct allows for low latency and high throughput RDMA links, without SMB Multichannel it would still lack a certain enterprise comfort level; SMB failures have always been disruptive to users at the least, and catastrophic at worst. SMB Multichannel allows for seamless use of all network interfaces (and can be combined with network teaming) with a near linear performance improvement (demos at the Build conference had four 10GbE ports pulling 4.5GB/s of throughput). Even a single NIC that supports Receive Side Scaling (RSS) can benefit from the new multichannel capabilities by establishing multiple TCP connections, allowing CPU load to be balanced across cores and CPUs rather than pinned by the single core affinity of a single TCP connection. When pushing a lot of small I/O over a large interface such as 10GbE this becomes essential. Clients that support SMB3 will automatically utilize multiple channels when RSS is configured, and multiple NICs when they are available.
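The core idea behind Multichannel, spreading one logical transfer across several TCP connections so each can be serviced by its own core or NIC queue, can be sketched in miniature. This is an illustration of the concept only, not of the SMB3 protocol itself; the channel count, chunk size and loopback ports are arbitrary:

```python
import socket
import threading

CHANNELS = 4
CHUNK = 64 * 1024  # 64 KiB slice per channel in this toy example

def server(sock, results, idx):
    # Each "channel" gets its own listener thread, standing in for a
    # separate core/NIC queue servicing one TCP connection.
    conn, _ = sock.accept()
    received = b""
    while len(received) < CHUNK:
        data = conn.recv(65536)
        if not data:
            break
        received += data
    results[idx] = len(received)
    conn.close()

def multichannel_demo():
    listeners, threads = [], []
    results = [0] * CHANNELS
    for i in range(CHANNELS):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))  # ephemeral port per channel
        s.listen(1)
        listeners.append(s)
        t = threading.Thread(target=server, args=(s, results, i))
        t.start()
        threads.append(t)

    # Client side: one connection per channel, each carrying its own
    # slice of the payload independently of the others.
    clients = []
    for s in listeners:
        c = socket.create_connection(s.getsockname())
        c.sendall(b"x" * CHUNK)
        clients.append(c)
    for t in threads:
        t.join()
    for c in clients:
        c.close()
    for s in listeners:
        s.close()
    return sum(results)

if __name__ == "__main__":
    print(multichannel_demo())  # all four slices arrive: 262144 bytes
```

In the real protocol the server advertises its interface capabilities and the client decides how many connections to establish; the sketch just shows that N parallel connections deliver N slices of the payload independently.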
SMB Application Shares
With the combination of the two new SMB features listed above, and the myriad of improvements to networking and storage in Windows Server 2012, we finally have the capability to provide certain enterprise applications with file level storage. This move vastly simplifies already complex enterprise application deployments by abstracting much of the low level storage architecture away from the application architecture, while giving us yet another option for storing large, complex and performance demanding systems. Currently SQL Server 2012 databases and Hyper-V 2012 VHDX files are supported in an SMB3 environment, and application shares provide the performance and availability inherent in SMB Direct and SMB Multichannel at an SMB cluster level, giving us a single namespace spanning multiple servers.
All of these features and capabilities are helping bring the file server back to a Windows server, and although major vendors such as EMC and NetApp will be supporting SMB3, it is unknown if they will support the full gamut of features and capabilities, or the timeframe to reach this level of compatibility.
As file systems get larger and larger and our hunger for data ever increases, it becomes that much more critical that our file server infrastructure can scale and perform to meet our demands. Windows Server 2012 and SMB3 help us get there, today.
EMC today officially launched VFCache, the project previously known as Lightning.
VFCache is a host side PCIe SSD product not totally dissimilar to products from Fusion-io in its mechanical operation, but it possesses unification with the rest of the EMC suite of products, adding significant value to this version 1.0 offering. Software is key here!
At its heart, VFCache allows IO to occur over the PCIe bus at lightning (no pun intended) speeds, approaching 4,000x the IOps per GB of traditional magnetic media, and about 20x the IOps per GB of SSDs. That is an amazing catch up step for a metric (drive IOps per GB) that has remained rather stagnant for the last 20 years.
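Those per-GB ratios are easy to sanity check with round numbers. The baselines below are my own assumptions (a 600GB 15K disk at ~180 IOps, a 200GB SSD at ~40,000 IOps, a 300GB PCIe flash card at ~350,000 IOps), not EMC's published specs, so the exact multipliers will shift with whatever figures you pick:

```python
# Back-of-envelope IOps-per-GB comparison with assumed hardware figures.

def iops_per_gb(iops, capacity_gb):
    return iops / capacity_gb

hdd  = iops_per_gb(180, 600)      # 15K magnetic disk: ~0.3 IOps per GB
ssd  = iops_per_gb(40_000, 200)   # SATA SSD: ~200 IOps per GB
pcie = iops_per_gb(350_000, 300)  # PCIe flash card: ~1,167 IOps per GB

print(round(pcie / hdd))  # ~3889x over magnetic media
print(round(pcie / ssd))  # ~6x over the SSD, with these baselines
```

The magnetic media multiplier lands right where the marketing figure does; the SSD multiplier depends heavily on which SSD you pick as the baseline.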
A few important facts about the release, summarized from Chad's blog (Virtual Geek):
- Software is key, the hardware is inconsequential, but the initial partner vendor is Micron providing a 300GB unit
- Support for a variety of Dell, Cisco, HP and IBM systems, but no Blades yet
- Utilization in a VMware environment ties the VM to a local system, removing vMotion benefits
- Primary use case for v1.0 is extremely high performance requirements, high read cache
Things to look out for, and that are already on the roadmap
- De-Duped Cache (stealing tech from Avamar, Data Domain and Recoverpoint?)
- Better integration with Arrays (VNX, VMAX)
- Distributed Cache (read: VMware clusters operate properly with it?)
- Bigger models
- Mezzanine models for blades
- MLC usage
And this leads ultimately to the evolution of the product line into Project Thunder, another initiative on the cards from EMC that extends VFCache to the network: small 2U or 4U offerings, terabytes of flash, millions of IOps, and strong integration with local VFCache systems.
Most of the Project Thunder details are still under wraps, but it should be a very compelling offering, and an essential piece of larger VDI and heavy IO virtualization strategies. A tech preview is coming Q2 2012, probably at EMC World.
I was recently tasked with preparing for a proctored exam that had very limited ‘exam prep’ type material available for it, and felt I should pass on some of the lessons learned (it took two attempts to pass) to my readers.
The product was a very niche hardware appliance, and this brings us to our first limitation: exams for hardware based products are hard to get real world exposure to. Hardware products are pricey and exclusive, and you often can’t just ‘play around with’ one in a lab. Fortunately for me, I had access to a ‘virtual edition’ of the product that was released a year or so back for smaller scale deployments; thank the stars for virtualization, no? Try hard to get some hands on time. Virtual editions make it easy, but even without one it’s not impossible: reach out to a vendor directly, or to a stakeholder in your exam process (you are taking it for a reason, right?).
The second problem was that, as a niche product, there was very little written material available for it in my usual formats: forums, exam prep guides, technical books, etc. That said, this problem was mainly on me. There was a lot of material online (manuals, white papers, best practice PDFs), but I usually use that sort of material as a supplement, not a primary source, so I had to adapt fairly quickly.
Hit the vendor websites and read everything you can get your hands on; buried in all the marketecture documents you will find the little gems you need to succeed.
Find an expert. I was lucky enough to have access to one of the vendor’s technical consultants, and spent an afternoon with him going over some of the things I was drawing a blank on; this was probably the single most important step I took. I picked up more from an expert in an afternoon than I did reading over 600 pages of material. Find someone, buy them lunch, coffee, whatever, and make it happen; the results will be amazing!
Don’t be afraid to fail. My exam was really weakly blueprinted by the vendor, so I had very little information going in about what I would be tested on, which areas were the focus, or how broad the exam footprint was. Don’t be afraid to fail and have to try again; learning your weaknesses from an exam attempt can help focus the final study phase on the areas you need. If you can click through questions without much thought, you know your stuff; if you resort to guessing, it needs improvement. Lots of improvement!
My first attempt landed me 10% below the passing baseline. Not my finest work, but considerably better than I expected, so I spent the weekend playing with the virtual edition and re-reading some of the areas I knew I had struggled on; the second result was a pass at over 85%. Don’t be afraid to try again!
Success is the ability to go from one failure to another with no loss of enthusiasm. – Winston Churchill (1874–1965)
I used to dread failing an exam, and for years I had a perfect record of passes, but I was spending months preparing for tests even when I knew the content. That just doesn’t work with my work/life balance today, nor with the fast changing pace of the industry I operate in.
I hope this helps some of you tackle those harder to reach exams, how do you prepare for them?
This weekend I decided to give my aging PC a bit of a makeover. I picked up a Corsair SATA 3 120GB SSD and 8GB of Corsair XMS2 DDR2-6400 memory during my first visit to the local Fry’s (what a dangerous place to go with a debit card!).
I had already decided to rebuild the system, but the new hardware purchases were kind of spur of the moment. My old system (with a BIOS dated 2007) was getting a little long in the tooth; it managed a respectable 6.2 WEI score, but used a mechanical drive and was humming along on 4GB of RAM due to some hardware failures a while back.
This system was built in 2006 with a first generation Core 2 Duo 1.86GHz, 2GB of DDR2, a GeForce 7950 graphics card and an Asus P5WDG2 WS Professional board. The board is the critical component here, and while it doesn’t sport newer tech like USB3 or SATA III, it was an expensive and top of the line board in its day, which is no doubt the reason I have survived so long with it as the core of my system.
Over the years numerous upgrades have happened, including the system being moved piece by piece from the UK to the US when I emigrated in 2008
It now sports a 2.4GHz Core 2 Quad, 8GB of DDR2-6400, an ATI Radeon HD 5700, and the Corsair SATA 3 SSD, and it’s like a completely new system.
Windows runs VERY fast off of the SSD. Boot times are still not brilliant, owing to the aging BIOS with its old school linear boot cycle, but the system still boots relatively fast, and once in I really feel the difference.
Both my laptop and tablet have SSDs in them, so I was really feeling the sluggishness of my PC, and this minor purchase sure made the difference.
I have had a few friends say they only ever thought of putting SSDs in mobile devices for the battery benefits, and that the capacity was a real challenge for them, but on a desktop with near unlimited drive scalability it’s not hard to have a 2TB data drive (or in my case, three of them) and an SSD for the boot device.
I install most of my games and larger apps onto the data drive, including Visual Studio SDK files, iTunes music etc, but the critical Windows and application components run off of the SSD and perform outstandingly. This is one upgrade I shouldn’t have waited so long for.
For some of us, migrating from Exchange 2003 to Exchange 2010 is an exciting prospect, with tons of new features, simpler high-availability options and a lot more power for the users.
One of the commonly overlooked design pieces of a Microsoft Exchange 2010 architecture is namespace considerations.
For most Exchange 2003 environments the following names are usually in play:
- mail.mydomain.com – MX Record, mail flow
- webmail.mydomain.com, OWA, OMA, EAS, (Web Services) – Certificate Name
This is not always the case, some people will just use mail.mydomain.com for everything, and this also works great. Your edge configuration will apply certain requirements/restrictions on how you configure your existing namespace, but this is all relatively simple in Exchange 2003 compared to some of the considerations in Exchange 2010.
Most organizations are deploying Exchange 2010 in a highly available configuration, and many are implementing site resilient considerations also. This can lead to a complex namespace design that should be carefully considered and designed before the first server is deployed in your organization.
Some things to consider in Exchange 2010 from a high availability standpoint are:
- webmail.mydomain.com – Primary point of presence, OWA, OA, EAS, OAB – Certificate Name
Autodiscover Service
- autodiscover.mydomain.com – auto-configuration URL – Certificate Name
Client Access Arrays
- site-casA.mydomain.com – Internal AD reference to CAS Array for each site
- site-casB.mydomain.com – Internal AD reference to CAS Array for each site
- casA-nlb.mydomain.com – Assigned to VIP of Load balancer for HA CAS – Certificate Name
- casB-nlb.mydomain.com – Assigned to VIP of Load balancer for HA CAS – Certificate Name
- legacy.mydomain.com – Name used for redirection to 2003 during migration – Certificate Name
- webmail2.mydomain.com – alternate internet point of presence – Certificate Name
- failbackA.mydomain.com – DNS Failback URL for timeout consideration – Certificate Name
- failbackB.mydomain.com – DNS Failback URL for timeout consideration – Certificate Name
As you can see, there is a lot to consider here before jumping in and throwing some servers up, and some of these names may not be required, or can be consolidated with others, depending on your edge topology.
For more detailed information on namespace design please check out the TechNet article located here
Some of you will know that I am active in a number of regional user groups, in fact, some of you may have found me or my blog by attending one of the events I have spoken at or helped co-ordinate.
The Boise user group scene has kind of dried up over the last few years, and I endeavor to help change that. It was always a goal of mine to have an active and vibrant forum for local users to network and discuss topics of interest, and while we are served very well by the local VMware user group (with over 100 people in regular attendance), I feel the general IT scene is still underserved.
Recently I assisted Jeff Wilding and some Microsoft Staff kick off the Boise Microsoft Unified Communications User Group by presenting a piece on Exchange migrations and some of the considerations to be made in this space. After assisting him with preparations, and giving my presentation I was asked if I would be interested in taking a larger role in future events and I have committed myself to helping this group succeed.
I also feel now would be a great time to get the Boise IT Pro User Group back up and running with a regular schedule, and with such a broad focus the topics could be endless
If you, or someone you know are interested in this space, and helping out the local IT community do not hesitate to get in touch with Jeff Wilding, Mark Rezansoff or myself
You will notice a new page listed at the top of my blog that will display the most current info I have on a number of regional user groups that I have participated in, as well as any other prudent industry events that may be of interest
I have had queries from a couple of clients of mine regarding the deployment of UAG in a multi platform environment, not only Windows, but Mac OS X, Linux, mobile devices etc. The demand seems to be for a secure connectivity solution that can handle this sort of mixed environment with minimum aggravation to users.
One particular client emphasized a client-less solution to meet their needs, as they are considered early adopters on the OS front and, as we all know, that usually breaks software clients!
UAG seems to be synonymous with Microsoft DirectAccess, and as an advanced platform for the deployment of DirectAccess, that is an understandable misinterpretation, but UAG is much more than just a heavy duty implementation platform for DirectAccess.
The Trust Pyramid
As a new generation of users and devices enters the workplace, IT is presented with a set of new and unique challenges: deliver content anywhere it’s desired to facilitate business needs, but keep it secure and manageable, also for business reasons. How do we accomplish that when so many devices are not managed? Personal cell phones, iPads, home computers? Do we just block access from these devices? That’s fast becoming an untenable option, especially as board level staff are bringing their shiny new iPads to the table.
The trust pyramid fits nicely with UAG’s remote access technologies, as each of them provides a different level of access and control while being deployed and managed from a common platform from an IT perspective.
- Direct Access – Windows 7 Enterprise Only, Full, always on network access for the most trusted and managed of systems
- SSL VPN – Multi platform/browser, Configurable access to applications and services for less managed devices such as non domain OS X systems and Linux boxes
- Web Portals – Multi platform/browser, Restricted, specific access to applications for personal devices unknown to the IT department
As part of the pyramid we also take into account what we present, not just how we present it. For instance, a user accessing the network via DirectAccess may have full access to LOB and CRM systems, while users coming in on a personal tablet may be limited to non restricted file data and email. By providing separate connectivity mechanisms in this manner, UAG helps us meet the IT governance needs of our organization while also empowering users to work whatever way is convenient for them.
Aside from DirectAccess, which I’m sure will have numerous posts of its own, SSL VPN connectivity through UAG provides non Windows 7 systems (either via ActiveX for IE sessions, or Java for non IE sessions) seamless access to systems configured to utilize it. This extends remote access to non Microsoft devices and third-party browser software such as Mozilla and Opera. SSL VPNs allow access to desired network services that would otherwise require a traditional fat-VPN configuration (and usually the client that goes with it). They operate by creating a secure tunnel between your device and the UAG server and then funneling any data appropriate to the connection over that tunnel. As this technology utilizes SSL and HTTPS, there are very few circumstances where it does not work.
Web portals are the most restricted of access methods, providing an interface to access a web application that is fronted by the UAG itself, so users are actually talking to UAG, and in most cases UAG talks to the back end servers on their behalf.
This allows IT to be a little more liberal with the devices they allow access to the portals, as the access is so limited, and provides access to the users that they desire, email, SharePoint, or whatever the corporation deems available.
These can be configured and customized to a high level, even presenting different portals to different sets of users to really fine grain the access to the system.
I keep hearing a lot of confusion as to what UAG is, where it fits, and what it does, so here is a brief introduction to what it does and what its capabilities are.
Forefront Unified Access Gateway 2010 is designed as a gateway into your organization, and utilizes a number of other Microsoft components to enable a seamless and integrated experience for both corporate users and 3rd parties.
- UAG is NOT the same as TMG, nor are the two interchangeable
- UAG is geared toward securely allowing inbound access
- TMG is geared toward protecting internal users from external threats
UAG vs TMG
A lot of confusion arises because UAG installs some TMG components and utilizes them, mainly for array management and firewalling. It cannot, however, operate as a forward or reverse proxy, nor can it do web filtering or use the active protection components that TMG does.
The TMG components built into UAG are there to protect the UAG server itself, as it is generally afforded a globally routable external address and does not sit behind its own firewall, due to the NAT restrictions that apply if you wish to utilize DirectAccess.
Microsoft DirectAccess technology allows you to bridge the connections of enterprise endpoints to the corporate network whenever they are online; this is accomplished seamlessly and securely with a combination of IPv6, PKI and IPsec technologies. It allows users to access resources on the corporate infrastructure safely from anywhere they can get online, as well as providing internal support staff access to roaming systems without requiring them to join special support sessions, install special software, or have the user bring the system into an office.
DirectAccess is a technology built into Windows Server 2008 R2 and can operate without UAG; however, there are significant benefits to deploying DirectAccess through a UAG system, including DNS64 and NAT64, both of which are required to allow seamless network access to IPv4-only corporate resources (not just IPv6-ready apps).
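NAT64 works by embedding the 32-bit IPv4 address of a v4-only resource inside an IPv6 prefix, so IPv6-only DirectAccess clients can still reach it, while DNS64 synthesizes matching AAAA records. The embedding itself is simple enough to sketch, here using the well-known 64:ff9b::/96 prefix from RFC 6052 (UAG's own translation internals may differ):

```python
# Sketch of NAT64-style IPv6 address synthesis for an IPv4-only host.
import ipaddress

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4_str):
    v4 = ipaddress.IPv4Address(ipv4_str)
    # Place the 32-bit IPv4 address in the low 32 bits of the /96 prefix.
    v6 = ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) | int(v4))
    return str(v6)

print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221
```

The translator holds the state mapping in the other direction, so the IPv4-only server never needs to know IPv6 is involved; that transparency is what makes legacy corporate apps reachable over DirectAccess.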
UAG provides a user web portal to access applications, services and network resources, as well as integrating with an RDS gateway component if you choose to install it. This portal supports numerous devices and can detect the type of device and the type of experience to deliver. These portals can be customized to fit the client’s needs, displaying client assets and specifics on a case by case basis.
UAG is also capable of VPN termination, either via integration with RRAS for PPTP and SSTP tunnels, or via native UAG SSL VPN capabilities.
While TMG can also do VPNs, it is not afforded the same SSL VPN capabilities that UAG has; this is another point in UAG’s favor.
UAG is the Microsoft recommendation for publishing Microsoft server resources; this is a shift from IAG 2007, when MS still pushed ISA 2006 as its best practice method for securing Exchange and SharePoint web interfaces. If you wish to make services such as Outlook Web Access, Outlook Anywhere, ActiveSync and SharePoint sites available to your users over the internet, this is the technology to deploy to secure and manage access to those resources.
TMG can still handle this, but many of the upgrades and features added to UAG 2010 have not been included in TMG’s publishing capabilities, so when publishing SharePoint, Exchange, or even RDS Web Access, UAG is the way to go (reverse proxy requirements are still handled by TMG 2010, and this includes OCS and Lync requirements).
UAG has client and server CAL requirements, unlike TMG, which is licensed per server (unless you want all the filtering and protection suites). However, ECALs include UAG CALs; this is good to know for ECAL customers, as the majority of the cost is already paid and you can start benefiting from the technology straight away through a pilot or implementation engagement.
Intel has finally realized a commercial package for its Light Peak initiative, in the form of Thunderbolt. Apple were the first to bring this to bear, in the new MacBook Pro lineup announced last week; however, Intel have been quick to claim that this will not be an Apple exclusive technology and will be available to other partners and OEMs.
Despite the name, and initial plans, Thunderbolt is currently based on an electrical medium, not an optical one, moving away from the initial concept of an optical interconnect for high demand peripherals and buses. Intel have committed to continuing work on an optical option in the future, stating that results from testing on the electrical side were far better than expected, and that copper keeps both costs and complexity down for this initial offering.
Change of plans?
Light Peak was destined to be a transport medium, not a protocol itself; it wasn’t set to replace USB or FireWire, but rather the physical mediums used to connect these devices. The consensus initially was that USB may well be the protocol of choice, but Intel have opted for a combination of DisplayPort and PCI Express thus far.
This diagram from Intel shows a simplified version of how the technology works
As you can see, the Thunderbolt controllers at both ends (say, a monitor and a MacBook Pro) combine the signals from the two sources to cross a single cable. This allows the single Mini DisplayPort on a MacBook Pro to provide the video signal to the monitor, as well as other peripheral connectivity. Like USB, the ability to daisy chain these connections is built in, allowing, for example, a monitor to have Thunderbolt ports for other connections back to the MacBook Pro.
Utilizing PCIe in this manner provides some interesting possibilities. By extending the bus to remote devices there is potential to connect numerous other controllers directly to the PCIe bus on the remote device, and connect seamlessly to the host system via the single Thunderbolt cable. For instance, rather than just finding USB ports on a monitor, a manufacturer could build entire controllers for USB, FireWire and eSATA into the monitor, and have those controllers connect to the PCIe bus of the host system via Thunderbolt. This opens up some interesting deployment options for vendors, as well as streamlining the way we connect peripherals to the host system (I for one have very few spare ports on the back of my systems at present, and a way to streamline more effectively than multiple USB hubs is always appreciated!).
The downside to this is obviously the extension of the PCIe bus outside of the host system, which has already led some parties to raise security concerns, although this is no different than with existing bus extension technologies that operate at such low hardware layers, such as ExpressCard and FireWire.
Lots of bits, not a lot of cable
The most staggering achievement of the new technology is the bandwidth it brings to consumer devices. Each Thunderbolt port provides two full duplex, bi-directional 10Gbps channels, totaling 40Gbps, although it only adds DisplayPort 1.1a support on top of this, rather than the newer 1.2 standard. Even so, this amounts to a combined total of almost 60Gbps of bandwidth from that single port!
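The arithmetic behind those headline numbers is worth spelling out. The Thunderbolt side is straightforward; the DisplayPort contribution is my own reconstruction, assuming the 1.1a raw link rate (4 lanes at 2.7Gbps) counted in both directions, which lands in the ballpark of the quoted figure:

```python
# Tally the per-port bandwidth figures (all values in Gbps).

CHANNELS = 2        # Thunderbolt channels per port
PER_DIRECTION = 10  # each channel, each way (full duplex)

thunderbolt = CHANNELS * PER_DIRECTION * 2  # both directions counted
dp_11a = 4 * 2.7                            # DP 1.1a raw rate: 10.8Gbps

print(thunderbolt)                          # 40
print(round(thunderbolt + 2 * dp_11a, 1))   # 61.6, the "almost 60Gbps" ballpark
```

Whether it is fair to sum both directions is debatable, but that appears to be how the marketing total was reached; the uncontested figure is 40Gbps of PCIe-carrying bandwidth per port.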
The potential for this technology is quite astounding, and with bandwidth like that there are a myriad of new approaches to connectivity that could be imagined. However, the standard at present is an Intel-only offering, requiring the purchase of controllers from Intel; this itself could hinder the protocol’s adoption by third parties, especially ones loyal to competitors such as AMD, which would ultimately undermine the growth of the standard.
Look out for the compatible devices from Promise and LaCie already announced, as well as others from more vendors in the near future.
New York City recently appointed Rachel Sterne as its Chief Digital Officer (CDO), tasked with helping the City improve how it communicates with residents using modern communication mediums and social media.
An interesting appointment for sure. Traditionally the social media banner has been trumpeted by the CMO and the marketing department, sometimes well, and sometimes exceedingly poorly, as anybody who has been on Twitter for longer than a few years can attest. But does the appointment signify a shift in thinking about the way we approach and utilize social media? Traditionally these platforms have been seen as lucrative avenues for marketing, utilizing crowdsourcing and word of mouth to promote from within the target audience’s trusted influencers. More recently, a public relations and customer service avenue has been tackled, with the likes of Twitter and Facebook providing users a channel to comment on and receive feedback from the organizations they do business with; but given the public scrutiny of such a setting, corporations have been slow to go down this path, and it is riddled with troubles if not done properly with the right people at the helm.
Utilizing social media for effective communication back to the masses is one of the next hurdles for the technology to tackle, finally turning it into a truly duplexed conversation and not just a broadcast platform.
Rachel’s appointment has raised some concerns around her credentials, the position itself and what exactly it hopes to achieve, but I for one am interested to see the outcome of her tenure and what achievements and changes lie ahead for New York City and its new CDO.