Monthly Link roundup – November 2015

You can sign up below to receive the list by mail as it’s compiled each month



Modern Datacenter



Understanding the Microsoft Support Lifecycle

As a consultant, I am often put in the position of reminding customers to be conscious of support lifecycles when making strategic decisions about upgrades and internal product lifecycles. Today I was asked to help a customer plan a migration from Exchange 2003 to Exchange 2010, and given that Exchange 2010 just exited mainstream support, I felt I should write a little bit about this.

Microsoft itself has a very detailed set of lifecycle guidelines laid out here, but there is some understandable confusion among many as to what this actually means for them.

There are three main phases to the support lifecycle: Mainstream Support, Extended Support and Service Pack Support.  Each of these phases has a set of guidelines defining its duration, and the type and availability of support during that phase.

Mainstream Support

This is generally the important one, and the one most people associate with the general supportability of a product in broad use.  It starts when the product ships, and Microsoft guarantees this phase for a minimum of five years (at the current service pack level) for Business, Developer and Desktop Operating System products.  This guarantee is different for hardware and consumer software products.

Extended Support

This phase is also guaranteed for a minimum of five years (bringing the support window to a total of 10 years) for Business, Developer and Desktop Operating System products, but it has some caveats.

In extended support the limitations include:

  • You can no longer request changes or features to a product
  • Non-security hotfixes are limited to customers with an extended hotfix support plan
  • No complimentary support is available

The good thing is, your product is still supported, which is great for organizations not agile enough to upgrade frequently, or with 3rd party tooling that hampers timely platform upgrades.  The real world consequence is that the product becomes more complicated and expensive to support, increasing its overall lifecycle operational expense (OPEX) and diminishing its ROI over time.  This isn’t necessarily a bad thing; if you are aware of it from the outset and fit it into your calculations, your product is still valid, will still receive critical security updates, and support can still be acquired for a cost.  At this point in a product’s lifecycle, however, I can’t recommend you move to it: it’s already on the way out, likely has one or two successors in play, and you should weigh the considerations very carefully.
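To make the OPEX point concrete, here is a toy model of how rising extended support costs erode cumulative ROI. Every figure here is an invented assumption for illustration, not a real Microsoft or vendor number:

```python
# Toy lifecycle cost model; all figures are invented assumptions.
ANNUAL_VALUE = 100_000    # business value delivered per year
MAINSTREAM_COST = 20_000  # yearly OPEX during mainstream support
EXTENDED_COST = 45_000    # yearly OPEX once extended support begins

def cumulative_roi(years, mainstream_years=5):
    """Return (value - cost) / cost after `years` in service."""
    value = ANNUAL_VALUE * years
    cost = sum(MAINSTREAM_COST if y <= mainstream_years else EXTENDED_COST
               for y in range(1, years + 1))
    return (value - cost) / cost

print(f"ROI after 5 years:  {cumulative_roi(5):.2f}")
print(f"ROI after 10 years: {cumulative_roi(10):.2f}")
```

With these made-up numbers, ROI roughly halves over the extended support years, which is the kind of calculation worth doing before deciding to sweat an asset to the end of its lifecycle.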

Service Pack Support

This one can get a bit confusing.  It does not replace mainstream or extended support, but when a new service pack is released, for the majority of products Microsoft will provide support for the previous version for 12 months (24 for Dynamics and client and server operating systems).  What this means is that while your product is still in support, if you are on an out of date service pack (older than 12 months) you don’t actually meet the requirements to receive certain support, and the first suggestion is going to be ‘make sure you have the latest service pack’.

Microsoft also no longer releases features or updates for platforms running older service packs (you should still see security updates, though).

In the event that a service pack is the final service pack for a platform, the rules governing mainstream and extended support apply.  Take the example below:


Exchange 2010 was released on November 9th 2009, starting the support lifecycle counter

Service Pack 1 for Exchange 2010 was released on August 23rd, 2010, starting the 12 month countdown for RTM to fall out of support.  If after ~August 23rd, 2011 (exact dates vary) you still had RTM in play, you were out of support and would see limited availability of builds after this (in fact, the last rollup update for Exchange 2010 RTM was December 2010)

Service Pack 2 for Exchange 2010 dropped on December 4th, 2011, starting the countdown for SP1 support to end twelve months hence, and sure enough the last rollup for Exchange 2010 w/SP1 was December 10th, 2012

As Service Pack 3 was the final Exchange 2010 service pack, released on February 12th, 2013, it remains supported until mainstream, and then extended, support end, albeit with the caveats mentioned above. As long as you are running SP3 on Exchange 2010, you will have extended support until 2019; however, mainstream support for this product ended January 13th, 2015
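If you want to sanity check dates like these yourself, a few lines of Python will do it. This is just a sketch of the "12 months after the next service pack" rule described above; the release dates come from this post, not from the lifecycle portal:

```python
from datetime import date

def months_later(d, months):
    """Add whole months to a date (no day clamping; fine for these dates)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

# Exchange 2010 service pack release dates, as listed above
releases = {
    "RTM": date(2009, 11, 9),
    "SP1": date(2010, 8, 23),
    "SP2": date(2011, 12, 4),
    "SP3": date(2013, 2, 12),
}

# Each level falls out of support ~12 months after its successor ships
levels = list(releases)
for prev, nxt in zip(levels, levels[1:]):
    print(f"{prev} support ends around {months_later(releases[nxt], 12)}")
```

Running this reproduces the pattern above: RTM support ending around August 2011, SP1 around December 2012, and so on (exact dates on the lifecycle portal may vary by a few days).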

Hopefully this helps tables such as the one below make a little more sense.  With Windows Server 2008, Windows Server 2008 R2, Windows 7, Exchange 2010, and Unified Access Gateway 2010 all exiting mainstream support this year, as well as extended support ending for a number of widely deployed platforms, it’s a good time to understand your support lifecycle information and plan accordingly.

Exchange lifecycle information from the Microsoft lifecycle portal



Introduction to SMB3

Server Message Block (often incorrectly referred to as CIFS) has been the mainstay of Windows file servers since the days of NetBIOS, but version 2, and the improvements it brought with it, was released with Windows Vista in 2006 and with Server 2008.  While SMB2 brought some much needed improvements, including limiting the chattiness of SMB over the wire (essential in WAN environments), it still lacked the capabilities (and thus enterprise approval) of NFS.  In the time between SMB2 and SMB3 we saw users migrate away from Windows file servers to array based file storage, and away from block storage to NFS for virtualization, but all that is about to change.

Server Message Block 3

With the advent of Windows Server 2012, Microsoft has released SMB3, far more than a mere upgrade to SMB2.  SMB3 features important new enterprise capabilities, rich client features, and performance through the roof, a far cry from its predecessors.

SMB Direct (RDMA)

Utilizing RDMA network devices, SMB Direct allows SMB to bypass the NIC and transport layer drivers and communicate directly with the RDMA NIC.  This bypass increases performance and lowers latency significantly, to near wire speeds, and with InfiniBand connectivity those wire speeds can comfortably reach 50Gbps on a single port.  SMB Direct can be coupled with SMB Multichannel to provide a reliable and highly available network topology for low latency file server access, enabling the file level application support we are increasingly seeing. As file servers grow larger and larger, with file counts often nearing the billions, this improvement helps overcome one of the major bottlenecks of file server performance.

SMB Multichannel

While SMB Direct allows for low latency and high throughput RDMA links, without SMB Multichannel it would still lack a certain enterprise comfort level; SMB failures have always been interrupting to users at the least, and catastrophic at worst. SMB Multichannel allows for seamless use of all network interfaces (and can be combined with network teaming) with a near linear performance improvement (demos at the Build conference had four 10GbE ports pulling 4.5GB/s of throughput). Even a single NIC that supports Receive Side Scaling (RSS) can benefit from the new multichannel capabilities by establishing multiple TCP connections, allowing load balancing across cores and CPUs rather than the single core affinity of a single TCP connection. When pushing a lot of small I/O over a large interface such as 10GbE this becomes essential. Clients that support SMB3 will automatically utilize multiple channels when RSS is configured, and multiple NICs when they are available.
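As a rough back-of-envelope check on those Build numbers, four 10GbE links at around 90% efficiency land right at 4.5GB/s. A quick sketch; the efficiency factor is my assumption to account for protocol and CPU overhead, not a published figure:

```python
def aggregate_gb_per_sec(nics, gbits_per_nic, efficiency=0.9):
    """Theoretical aggregate throughput in gigabytes/sec across NICs,
    derated by an assumed protocol/CPU efficiency factor."""
    return nics * gbits_per_nic / 8 * efficiency

# Four 10GbE ports, as in the Build demo: near linear scaling
print(aggregate_gb_per_sec(4, 10))
```

The point of the near linear scaling is that each extra NIC (or RSS channel) adds its share of bandwidth without a serialization bottleneck on a single TCP connection.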

SMB Application Shares

With the combination of the two new SMB features listed above, and the myriad improvements to networking and storage in Windows Server 2012, we finally have the capability to provide certain enterprise applications with file level storage. This move vastly simplifies already complex enterprise application deployments by abstracting a lot of low level storage architecture away from the application architecture, while giving us yet another option for storing large, complex and performance demanding systems. Currently SQL Server 2012 databases and Hyper-V 2012 VHDX files are supported in an SMB3 environment, and application shares provide the performance and availability inherent in SMB Direct and SMB Multichannel at an SMB cluster level, providing a single namespace spanning multiple servers.


All of these features and capabilities are helping bring the file server back to a Windows server, and although major vendors such as EMC and NetApp will be supporting SMB3, it is unknown whether they will support the full gamut of features and capabilities, or on what timeframe they will reach this level of compatibility.

As file systems get larger and larger and our hunger for data ever increases, it becomes that much more critical that our file server infrastructure can scale, and perform to meet our demands. Windows Server 2012 and SMB3 help us get there, today.

Project Lightning (aka VFCache)

EMC today officially launched VFCache, the project previously known as Lightning

VFCache is a host side PCIe SSD product, not totally dissimilar in its mechanical operation to products from Fusion-io, but it integrates with the rest of the EMC suite of products, adding significant value to this version 1.0 offering.  Software is key here!

At its heart, VFCache allows IO to occur over the PCIe bus at lightning (no pun intended) speeds, approaching 4000x the IOPS per GB of traditional magnetic media, and about 20x the IOPS per GB of SSDs.  An amazing catch-up step for a metric (drive IOPS per GB) that has remained rather stagnant for the last 20 years.
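The IOPS-per-GB framing is easy to reproduce. Here is a sketch with illustrative numbers of my own choosing (not EMC's measurements) that lands near the ratios quoted above:

```python
# IOPS-per-GB density comparison; the IOPS and capacity figures below
# are illustrative assumptions, not vendor specifications.
media = {
    "15k HDD":    {"iops": 190,     "gb": 300},
    "SATA SSD":   {"iops": 50_000,  "gb": 400},
    "PCIe flash": {"iops": 750_000, "gb": 300},
}

density = {name: m["iops"] / m["gb"] for name, m in media.items()}
for name, d in sorted(density.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} {d:10,.2f} IOPS/GB")

print(f"PCIe vs HDD: ~{density['PCIe flash'] / density['15k HDD']:,.0f}x")
print(f"PCIe vs SSD: ~{density['PCIe flash'] / density['SATA SSD']:,.0f}x")
```

Notice that the HDD figure is dominated by capacity growth outpacing spindle IOPS, which is exactly the stagnation the post refers to.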

A few important facts about the release, summarized from Chad’s blog (Virtual Geek):

  • Software is key, the hardware is inconsequential, but the initial partner vendor is Micron providing a 300GB unit
  • Support for a variety of Dell, Cisco, HP and IBM systems, but no Blades yet
  • Utilization in a VMware environment ties the VM to a local system, removing vMotion benefits
  • Primary use case for v1.0 is extremely high performance requirements, high read cache

Things to look out for that are already on the roadmap:

  • De-Duped Cache (stealing tech from Avamar, Data Domain and Recoverpoint?)
  • Better integration with Arrays (VNX, VMAX)
  • Distributed Cache (read: VMware clusters operate properly with it?)
  • Bigger models
  • Mezzanine models for blades
  • MLC usage

And this leads ultimately to the evolution of the product line into Project Thunder, another initiative on the cards from EMC that extends VFCache to the network: small 2U or 4U offerings, terabytes of flash, millions of IOPS, and strong integration with local VFCache systems.

Most of the Project Thunder details are still under wraps, but it should be a very compelling offering, and an essential piece of larger VDI and heavy IO virtualization strategies.  A tech preview is coming in Q2 2012, probably at EMC World.

Preparing for Exams with limited material

I was recently tasked with preparing for a proctored exam that had very limited ‘exam prep’ material available for it, and felt I should pass on some of the lessons learned (it took two attempts to pass) to my readers.

The product was a very niche hardware appliance, and this brings us to our first limitation: exams for hardware based products are hard to get real world exposure to, as hardware products are pricey and exclusive and you often can’t just ‘play around’ with them in a lab.  Fortunately for me, I had access to a ‘virtual edition’ of the product that was released a year or so back for smaller scale deployments; thank the stars for virtualization, no?  Try hard to get some hands-on time; virtual editions make it easy, but even without one it’s not impossible.  Reach out to the vendor directly, or to a stakeholder in your exam process (you are taking it for a reason, right?)

The second problem was that, as a niche product, there is very little written material available for it in my normal formats: forums, exam prep guides, technical books etc.  That said, this problem was mainly on me.  There was a lot of material online (manuals, white papers, best practice PDF files), but I usually use that sort of material as a supplement, not a primary source, so I had to adapt fairly quickly.

Hit the vendor websites and read everything you can get your hands on; buried in all the marketechture documents you will find the little gems you need to succeed.

Find an expert.  I was lucky enough to have access to one of the vendor’s technical consultants, and spent an afternoon with him going over some of the things I was drawing a blank on; this was probably the single most important step I took.  I picked up more from an expert in an afternoon than I did reading over 600 pages of material.  Find someone, buy them lunch, coffee, whatever; make it happen, the results will be amazing!

Don’t be afraid to fail.  My exam was really weakly blueprinted by the vendor; I had very little info going in on what I would be tested on, which areas were the focus, or how broad the exam footprint was.  Don’t be afraid to fail and have to try again.  Learning from an exam which weaknesses you have can help you hone the final study phase on the areas you need: if you can click through questions without much thought, you know your stuff; if you resort to guessing, it needs improvement!  Lots of improvement.

My first attempt left me 10% below the passing baseline.  Not my finest work, but considerably better than I expected, so I spent the weekend playing with the virtual edition and re-reading some of the areas I knew I struggled on.  The second result was a pass at over 85%.  Don’t be afraid to try again!

Success is the ability to go from one failure to another with no loss of enthusiasm. – Winston Churchill (1874 – 1965)

I used to dread failing an exam, and for years I had a perfect record of passes, but I was spending months preparing for tests even when I knew the content.  That just doesn’t work with my work/life balance today, nor with the fast changing pace of the industry I operate in.

I hope this helps some of you tackle those harder to reach exams, how do you prepare for them?

6 years and rocking a 7.1 WEI

This weekend I decided to give my aging PC a bit of a makeover.  I picked up a Corsair SATA 3 120GB SSD and 8GB of Corsair XMS2 DDR2 6400 memory during my first visit to the local Fry’s (what a dangerous place to go with a debit card!).

I had already decided to rebuild the system, but the new hardware purchases were kind of spur of the moment.  My old system (with a BIOS dated 2007) was getting a little long in the tooth; it managed a respectable 6.2 WEI score, but used a mechanical drive and was humming along on 4GB of RAM due to some hardware failures a while back.

This system was built in 2006 with a first generation Core 2 Duo 1.86GHz, 2GB of DDR2, a GeForce 7950 graphics card and an Asus P5WDG2 WS Professional board.  The board is the critical component here; while it doesn’t sport newer tech like USB3 or SATA III, it was an expensive and top of the line board in its day, which is no doubt the reason I have survived so long with it as the core of my system.

Over the years numerous upgrades have happened, including the system being moved piece by piece from the UK to the US when I emigrated in 2008

It now sports a 2.4GHz Core 2 Quad, 8GB of DDR2 6400, an ATI Radeon HD 5700, and a Corsair SATA 3 SSD, and it’s like a completely new system.


Windows runs VERY fast off the SSD.  Boot times are still not brilliant, owing to the aging BIOS with its old school linear boot cycles, but the system still boots relatively fast, and once in I really feel the difference.

Both my laptop and tablet have SSDs in so I was really feeling the sluggishness of my PC, and this minor purchase sure made the difference

I have had a few friends say they only ever thought of putting SSDs in mobile devices for the battery benefits, and that the capacity was a real challenge for them, but on a desktop with near unlimited drive scalability it’s not hard to have a 2TB data drive (or in my case, three of them) and an SSD for the boot device.

I install most of my games and larger apps into the Data drive, including Visual Studio SDK files, iTunes music etc, but the critical windows and application components run off of the SSD and perform outstandingly.  This is one upgrade I shouldn’t have waited so long for.

Exchange 2010 Namespace considerations

For some of us, migrating from Exchange 2003 to Exchange 2010 is an exciting concept, with tons of new features, simpler high availability and a lot more power for users.

One of the commonly overlooked design pieces of a Microsoft Exchange 2010 architecture is namespace considerations.

Legacy Environments

For most Exchange 2003 environments, the following names are usually in play:

  • MX record – mail flow
  • OWA, OMA, EAS (Web Services) – certificate name

This is not always the case; some people will just use one name for everything, and this also works great.  Your edge configuration will apply certain requirements and restrictions on how you configure your existing namespace, but this is all relatively simple in Exchange 2003 compared to some of the considerations in Exchange 2010.

Exchange 2010

Most organizations are deploying Exchange 2010 in a highly available configuration, and many are implementing site resilient designs as well.  This can lead to a complex namespace that should be carefully considered and designed before the first server is deployed in your organization.

Some things to consider in Exchange 2010 from a high availability standpoint are:

Internet Presence

  • webmail – primary point of presence; OWA, OA, EAS, OAB – certificate name

Autodiscover Service

  • autodiscover – auto configuration URL – certificate name

Client Access Arrays

  • Internal AD reference to the CAS array (one per site)
  • Assigned to the VIP of the load balancer for HA CAS (one per site) – certificate name

Legacy Redirection

  • Name used for redirection to Exchange 2003 during migration – certificate name

Site Resiliency

  • Alternate internet point of presence – certificate name

Failback URLs

  • DNS failback URL for timeout considerations (one per site) – certificate name

As you can see, there is a lot to consider here before jumping in and throwing some servers up, and some of these names may not be required, or can be consolidated with others, depending on your edge topology.
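One practical way to keep track of all this is to generate your certificate subject alternative name (SAN) checklist from your site list. A small sketch; `contoso.com` and the host labels below are hypothetical placeholders, not names from any real design:

```python
# Assemble a certificate SAN checklist for an Exchange 2010 namespace.
# The domain and host labels are hypothetical; substitute your own design.
domain = "contoso.com"
sites = ["boise", "seattle"]

names = {f"webmail.{domain}",       # primary point of presence
         f"autodiscover.{domain}",  # auto configuration URL
         f"legacy.{domain}"}        # Exchange 2003 redirection during migration

for site in sites:
    names.add(f"mail-{site}.{domain}")      # CAS array VIP, one per site
    names.add(f"failback-{site}.{domain}")  # DNS failback URL, one per site

for name in sorted(names):
    print(name)
```

Even a toy script like this makes the cost of each additional site visible: every site adds names, and every name has to land on a certificate before the first server goes in.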

For more detailed information on namespace design please check out the TechNet article located here

The Power of Community – Usergroups

Some of you will know that I am active in a number of regional user groups, in fact, some of you may have found me or my blog by attending one of the events I have spoken at or helped co-ordinate.

The Boise user group scene has kind of dried up over the last few years, and I endeavor to help change that.  It was always a goal of mine to have an active and vibrant forum for local users to network and discuss topics of interest, and while we are served very well by the local VMware user group (with over 100 people in regular attendance), I feel the general IT scene is still underserved.

Recently I assisted Jeff Wilding and some Microsoft staff in kicking off the Boise Microsoft Unified Communications User Group by presenting a piece on Exchange migrations and some of the considerations to be made in this space.  After assisting him with preparations and giving my presentation, I was asked if I would be interested in taking a larger role in future events, and I have committed myself to helping this group succeed.

I also feel now would be a great time to get the Boise IT Pro User Group back up and running with a regular schedule, and with such a broad focus the topics could be endless

If you, or someone you know, are interested in this space and in helping out the local IT community, do not hesitate to get in touch with Jeff Wilding, Mark Rezansoff or myself.

Regional Groups

You will notice a new page listed at the top of my blog that will display the most current info I have on a number of regional user groups that I have participated in, as well as any other pertinent industry events that may be of interest.

UAG in a Multi-Platform world

I have had queries from a couple of my clients regarding the deployment of UAG in a multi-platform environment: not only Windows, but Mac OS X, Linux, mobile devices etc.  The demand seems to be for a secure connectivity solution that can handle this sort of mixed environment with minimum aggravation to users.

One particular client emphasized a client-less solution to meet their needs, as they are considered early adopters on the OS front, and as we all know, that usually breaks software clients!

UAG seems to be synonymous with Microsoft DirectAccess, and as an advanced platform for the deployment of DirectAccess, that is an understandable misinterpretation, but UAG is much more than just a heavy duty implementation platform for DirectAccess.

The Trust Pyramid

As a new generation of users and devices enters the workplace, IT is presented with a set of new and unique challenges: deliver content anywhere it’s desired to facilitate business needs, but keep it secure and manageable, also for business reasons.  How do we accomplish that when so many devices are not managed?  Personal cell phones, iPads, home computers?  Do we just block access from these devices?  That’s fast becoming an unavailable option, especially as board level staff are bringing their shiny new iPads to the table.

The trust pyramid fits nicely with UAG’s remote access technologies, as each of them provides a different level of access and control while being deployed and managed from a common platform from an IT perspective.

  • DirectAccess – Windows 7 Enterprise only; full, always on network access for the most trusted and managed of systems
  • SSL VPN – Multi-platform/browser; configurable access to applications and services for less managed devices such as non domain OS X systems and Linux boxes
  • Web Portals – Multi-platform/browser; restricted, specific access to applications for personal devices unknown to the IT department

As part of the pyramid we also take into account what we present, not just how we present it.  For instance, a user accessing the network via DirectAccess may have full access to LOB and CRM systems, while users coming in on a personal tablet may be limited to non restricted file data and email.  By providing separate connectivity mechanisms in this manner, UAG helps us meet the IT governance needs of our organization while also empowering users to do things whatever way is convenient for them.


Aside from DirectAccess, which I’m sure will have numerous posts of its own, SSL VPN connectivity through UAG provides non Windows 7 systems (either via ActiveX for IE sessions, or Java for non IE sessions) seamless access to systems configured to utilize it.  This extends remote access to non Microsoft devices and third-party browser software such as Mozilla and Opera.  SSL VPNs allow access to desired network services that would otherwise require a traditional fat-VPN configuration (and, usually, the client that goes with it).  They operate by creating a secure tunnel between your device and the UAG server and then funneling any data appropriate to the connection over that tunnel.  As this approach utilizes SSL and HTTPS, there are very few circumstances where it does not work.

Web Portals

Web portals are the most restricted of access methods, providing an interface to access a web application that is fronted by the UAG itself, so users are actually talking to UAG, and in most cases UAG talks to the back end servers on their behalf.

This allows IT to be a little more liberal with the devices they allow to access the portals, as the access is so limited, while still providing the access users desire: email, SharePoint, or whatever the corporation deems available.

These can be configured and customized to a high level, even presenting different portals to different sets of users to really fine-tune access to the system.

Forefront Unified Access Gateway 2010, what’s that then?

I keep hearing a lot of confusion as to what UAG is, where it fits, and what it does, so here is a brief introduction to what it does and what its capabilities are.

Forefront Unified Access Gateway 2010 is designed as a gateway into your organization, and utilizes a number of other Microsoft components to enable a seamless and integrated experience for both corporate users and 3rd parties:

  • UAG is NOT the same as TMG, nor are the two interchangeable
  • UAG is geared toward securely allowing inbound access
  • TMG is geared toward protecting internal users from external threats

A lot of confusion arises because UAG installs some TMG components and utilizes them, mainly for array management and firewalling.  It cannot, however, operate as a forward or reverse proxy, nor can it do web filtering or use the active protection components that TMG does.

The TMG components built into UAG are there to protect the UAG server itself, as it is generally afforded a globally routable external address and does not sit behind its own firewall, due to the NAT restrictions that apply if you wish to utilize DirectAccess.

Direct Access

Microsoft DirectAccess technology allows you to bridge the connections of enterprise endpoints to the corporate network whenever they are online; this is accomplished seamlessly and securely with a combination of IPv6, PKI and IPsec technologies.  It allows users to access resources on the corporate infrastructure safely from anywhere they can get online, as well as providing internal support staff access to roaming systems without requiring them to join special support sessions, install special software, or have the user bring the system into an office.

DirectAccess is a technology built into Windows Server 2008 R2 and can operate without UAG; however, there are significant benefits to deploying DirectAccess through a UAG system, including DNS64 and NAT64, both of which are required to allow seamless network access to IPv4-only corporate resources (not just IPv6-ready apps).
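NAT64's address mapping is easy to illustrate: the IPv4 address of an internal server is embedded in the low 32 bits of an IPv6 prefix, so an IPv6-only DirectAccess client has a routable IPv6 address for an IPv4-only resource. The sketch below uses the RFC 6052 well-known prefix as an assumption; UAG's own implementation details may differ:

```python
import ipaddress

def nat64_synthesize(ipv4, prefix="64:ff9b::/96"):
    """Embed an IPv4 address in the low 32 bits of a NAT64 /96 prefix."""
    net = ipaddress.ip_network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) +
                                 int(ipaddress.IPv4Address(ipv4)))

# An IPv6-only client can reach this IPv4-only server via the NAT64 gateway
print(nat64_synthesize("192.0.2.33"))
```

DNS64 is the companion piece: when a name has no AAAA record, the DNS64 service synthesizes one using exactly this kind of mapping, so the client never knows the target was IPv4-only.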

Remote Access

UAG provides a user web portal to access applications, services and network resources, as well as integrating with an RDS gateway component if you choose to install it.  This portal supports numerous devices and can detect the type of device, and the type of experience to deliver.  These portals can be customized to fit the client’s needs, displaying client assets and specifics on a case by case basis.

UAG is also capable of VPN termination; this can be via integration with RRAS for PPTP and SSTP tunnels, or via native UAG SSL VPN capabilities.

While TMG can also do VPNs, it is not afforded the same SSL VPN capabilities that UAG has; this is another point in UAG’s favor.

Server Publishing

UAG is the Microsoft recommendation for publishing Microsoft server resources; this is a shift from IAG 2007, when Microsoft still pushed ISA 2006 as its best practice method for securing Exchange and SharePoint web interfaces.  If you wish to make services such as Outlook Web Access, Outlook Anywhere, ActiveSync and SharePoint sites available to your users over the internet, this is the technology to deploy to secure and manage access to those resources.

TMG can still handle this, but many of the upgrades and features added to UAG 2010 have not been included in TMG’s publishing capabilities, so when publishing SharePoint, Exchange, or even RDS Web Access, UAG is the way to go (reverse proxy requirements are still handled by TMG 2010, and this includes OCS and Lync requirements).


UAG has client and server CAL requirements, unlike TMG, which is licensed as a server (unless you want all the filtering and protection suites).  However, ECALs include UAG CALs; this is good to know for ECAL customers, as the majority of the cost is already paid and you can start benefiting from the technology straight away through a pilot or implementation engagement.