Hyper-V Server 2012 R2 – Shared VHDX (TechEd Europe part 3)

When building Windows clusters, one of the least flexible requirements has always been the centralised storage. iSCSI disks, for example, were needed as a quorum/witness resource and to hold your application data. In modern multi-tenant environments, however, that is not something a storage admin gets happy or excited about: LUNs have to be masked, storage firewalls have to be deployed (to prevent a client machine from using anything other than the allowed iSCSI ports, for example) and sometimes even CHAP authentication has to be implemented.

In our own hosting environment there’s a storage firewall cluster in place, with its own frontend and backend VLANs and physically dedicated Ethernet cabling (to make sure storage traffic can never impact frontend application traffic and client request performance). A costly investment…
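To show how much lighter the Shared VHDX approach is, here is a minimal PowerShell sketch of presenting one shared virtual data disk to two guest-cluster nodes. The VM names, path and size are hypothetical; the VHDX has to live on a Cluster Shared Volume (or an SMB 3.0 share) and be attached to the virtual SCSI controller.

```powershell
# Create the shared data disk on a Cluster Shared Volume
# (fixed-size VHDX is the recommended format for shared data disks)
New-VHD -Path C:\ClusterStorage\Volume1\SqlData.vhdx -SizeBytes 100GB -Fixed

# Attach it to both guest-cluster nodes (hypothetical VM names);
# -SupportPersistentReservations is the switch that marks the VHDX as shared
Add-VMHardDiskDrive -VMName SQLNODE1 -ControllerType SCSI `
    -Path C:\ClusterStorage\Volume1\SqlData.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -VMName SQLNODE2 -ControllerType SCSI `
    -Path C:\ClusterStorage\Volume1\SqlData.vhdx -SupportPersistentReservations
```

Inside the guests the disk appears as ordinary shared SAS storage, ready to be added to the failover cluster, with no iSCSI targets, LUN masking or CHAP in sight.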

Windows Server 2012 R2 – what’s new?

Microsoft has just announced the R2 update of Windows Server 2012. Although 2012 was already another big step up from 2008 R2 and very feature-complete, there’s always room for improvement… and I must admit that each of the previous three releases has pleasantly surprised me with more ease of use, more features and better stability.

So what can we look forward to? These are my favorites:

1. Further stability and performance progress with Hyper-V

  • Some annoying shortcomings compared to market leader VMware are finally getting crushed: memory thin provisioning was already a major jump forward, but now we’re also getting live virtual disk expansion (a quick sketch follows below this list) and shared ISOs should no longer block live migration.
  • Shared VHDX is paving the way for virtual cluster disks. Finally we can back up clusters through the hypervisor with products like Veeam; until now, the setup with iSCSI disks still had to be backed up through pain-in-the-ass agent-based backups or storage snapshotting, while all your other servers were nicely snapshotted and backed up with a 100% success ratio.
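As an illustration of the live disk expansion mentioned above, a hedged PowerShell sketch (the path and size are hypothetical; in 2012 R2 an online resize works for VHDX files attached to the virtual SCSI controller):

```powershell
# Grow a running VM's data disk without downtime
# (2012 R2, VHDX attached to a virtual SCSI controller)
Resize-VHD -Path C:\ClusterStorage\Volume1\FileData.vhdx -SizeBytes 200GB

# The guest then sees unallocated space; extend the volume from inside
# the VM with Disk Management or the Resize-Partition cmdlet
```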

Cisco Catalyst LACP-based port config for Hyper-V NIC load balancing

10 Gbit/s switches are slowly becoming affordable, but I still see scenarios where Hyper-V servers are uplinked via gigabit. To get enough bandwidth to run a lot of machines, carry your SAN storage traffic and do live migrations, you need link aggregation to reach multi-gigabit speeds.

There are a few mechanisms available in Hyper-V to use multiple NICs for load balancing or failover. If your Hyper-V 2012 servers are attached to Cisco switches, then one of the most interesting (IMHO) is LACP teaming combined with the TransportPorts load-balancing algorithm.
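On the Windows side that could look like the sketch below (team, adapter and switch names are hypothetical):

```powershell
# Build an LACP team that hashes outbound traffic on TCP/UDP ports,
# so flows from one host are spread across all team members
New-NetLbfoTeam -Name "HV-Team" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false

# Bind the Hyper-V virtual switch to the new team interface
New-VMSwitch -Name "External" -NetAdapterName "HV-Team" -AllowManagementOS $false
```

On the Catalyst side the matching interfaces need to be bundled into a port channel with channel-group mode active (active = LACP); where the platform supports it, a layer-4 port-channel load-balance hash makes sure return traffic is spread across the links as well.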
