Hyper-V has gained significant traction in virtualized computing environments, with customers spanning a wide spectrum from small and medium businesses to large enterprises. By building its virtualization platform directly into Windows Server, Microsoft has fostered Hyper-V adoption and made it easy for customers to create and manage a virtualized computing environment. The next question you might ask is: where and how does software-defined storage fit into this?
One of the advantages of moving to a hypervisor-based compute environment is the ability to form a highly available cluster that supports application resiliency. A prerequisite for any failover cluster is to have shared storage accessible to all hosts in the cluster. Storage platforms that deliver seamless high availability and built-in DR across multiple sites can complement a Hyper-V failover cluster, enabling it to do what it does best.
Enter Hedvig! Hedvig is a scale-out distributed storage platform with real-time multi-site replication, providing protection of data and also delivering maximum availability in the event of server or data center outages. Hedvig’s hyperscale architecture is ideal for Hyper-V virtualization deployments, providing highly elastic and flexible storage that can be scaled independently of compute. The Hedvig software-defined solution runs completely on commodity hardware, reducing enterprise storage costs while modernizing businesses.
Reference architecture for Hyper-V
Microsoft Windows Server 2012 ships with a multitude of server roles and technologies. The focus of this blog is the Hyper-V role. A number of Windows Servers running Hyper-V roles can be pooled together to form a highly available Hyper-V cluster by leveraging the Windows Failover Clustering feature. The combination of virtualization and failover clustering provides continuous access to applications and services, thereby eliminating any single point of failure.
The Hedvig + Hyper-V reference architecture illustrates the Hedvig Distributed Storage Platform operating in conjunction with a Hyper-V Failover Cluster. Hedvig’s scale-out storage software installs on standard x86 servers (also known as cluster nodes) and can span a storage cluster across multiple data centers and/or clouds that network together to form a storage resource pool. The compute tier (Hyper-V hosts) accesses the storage resource pool via a Hedvig software client called the Hedvig Storage Proxy.
The Hedvig Storage Proxy is a component that resides completely in the compute tier. In this reference architecture, each Hyper-V host has its own storage proxy running as a virtual machine. The storage proxy masquerades as an iSCSI target, providing storage to the corresponding Hyper-V host in the form of one or more block devices. The storage proxy is completely stateless and runs as an HA pair.
Scale-out storage for Hyper-V failover clusters
Hedvig leverages Windows Failover Clustering features such as Cluster Shared Volumes (CSV) and Scale-Out File Server (SOFS) to integrate seamlessly with a Hyper-V Failover Cluster, providing an end-to-end, fault-tolerant, highly available solution. Before discussing how to set up this solution, let's define the terms CSV and SOFS.
Cluster Shared Volumes (CSV)
Cluster Shared Volumes (CSV) is a Windows Failover Clustering feature that introduces a layer of abstraction between Hyper-V applications and the underlying storage, allowing multiple nodes in the failover cluster simultaneous access to the same disk (LUN) provisioned as an NTFS volume. CSV enables quick failover between nodes without requiring a volume to be unmounted and remounted. CSV also simplifies the management of a large number of disks by presenting a consistent file namespace (for example, C:\ClusterStorage\VolumeX) on every node in the failover cluster.
The ownership of CSV is managed by failover cluster nodes through SCSI-3 Persistent Reservations. Since each cluster node accesses the storage resource through its own local Hedvig Storage Proxy, which masquerades as an iSCSI target, SCSI-3 Persistent Reservations on disks are tracked at a Hedvig cluster level, as opposed to the storage proxy level. CSV ownership in Windows Server 2012 R2 is automatically distributed and rebalanced across the failover cluster nodes.
Scale-Out File Server (SOFS)
SOFS is a Windows Failover Clustering feature that provides continuously available scale-out file shares for file-based server application storage. SOFS is predominantly designed for high usage, always-open files such as virtual machines and SQL databases. CSV can be mapped to scale-out file shares that can be shared among multiple nodes in a failover cluster.
After the file shares are set up, they can be used as backend storage by Hyper-V or SQL Server hosts to store *.vhdx files or SQL Server database files. If an SOFS storage node fails while an application is accessing the storage, the SMB client on the application's host is notified and selects the next available SOFS storage node to connect to and resume I/O operations. The transparent failover capability of SOFS, coupled with the high availability guarantees of the underlying Hedvig storage platform, makes this a highly available storage solution.
SOFS setup with Hedvig
The first step in setting up a Scale-Out File Server with Hedvig is to provision storage on the Hedvig cluster. Storage assets are provisioned on Hedvig as an abstraction called a virtual disk. Storage policies such as compression, deduplication and client-side caching can be enabled at per virtual disk granularity. You can choose storage policies specific to the type of workload or application you plan to run on these virtual disks.
The next step is to present these virtual disks as iSCSI LUNs to the storage proxies running on each cluster node in the failover cluster. To enable a cluster node to consume iSCSI LUNs only from its local storage proxy, you also need to update the ACL information on each storage proxy. ACLs can also let you isolate iSCSI LUNs among specific cluster nodes running the Hyper-V or SQL Server roles.
Using the Microsoft iSCSI initiator on each cluster node, log in to the iSCSI target to connect to the provisioned virtual disks. After all disks are connected, bring them online through the Disk Management console, then format them with the NTFS filesystem. It is not necessary to mount these disks or assign a drive letter after formatting.
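These steps can also be scripted with the built-in iSCSI and Storage PowerShell modules rather than clicked through the GUI. A minimal sketch, assuming the local storage proxy listens on a placeholder address of 192.168.1.10 and that the new Hedvig LUNs are the only raw disks on the node:

```powershell
# Register the local Hedvig Storage Proxy as an iSCSI target portal
# (192.168.1.10 is a placeholder for the proxy's iSCSI address)
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.10

# Log in to all discovered targets; -IsPersistent reconnects them after reboot
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# New iSCSI disks may arrive offline depending on the SAN policy; bring them online
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false

# Initialize and format the raw disks with NTFS
# (no drive letter is assigned, since CSV does not require one)
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```

Repeat this on each cluster node so that every node sees the same set of LUNs through its local storage proxy.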
A prerequisite for assigning any provisioned storage to clustered roles is to pass the Failover Cluster Validation tests for storage. Failover cluster validation tests verify whether the underlying storage array supports failover cluster operations. The following is a snapshot of the failover cluster validation test results for Hedvig iSCSI LUNs.
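The same validation can be launched from PowerShell with Test-Cluster, restricting the run to the storage tests. The node names below are placeholders:

```powershell
# Run only the storage validation tests against the candidate cluster nodes
# (hv-node1 and hv-node2 are placeholder hostnames)
Test-Cluster -Node hv-node1, hv-node2 -Include "Storage"
```

The cmdlet writes an HTML validation report that you can review before proceeding.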
After all cluster validation tests pass, you can add the NTFS volumes to the failover cluster, where they appear as clustered volumes. However, to enable seamless and quick failover between cluster nodes, you must add these clustered volumes to Cluster Shared Volumes (CSV). When you do, you are asked to choose a filesystem path where each volume will be mounted on every cluster node. The default path for a cluster shared volume is C:\ClusterStorage\<VolumeName>.
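Both steps can be done with the FailoverClusters PowerShell module. A sketch, assuming the default cluster resource name "Cluster Disk 1":

```powershell
# Add every disk that is visible to all nodes into the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote a clustered disk to a Cluster Shared Volume
# ("Cluster Disk 1" is the default name assigned by the cluster)
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

After this, the volume appears under C:\ClusterStorage\ on every node.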
The last step is to configure SOFS. After CSV is set up, create a new File Server role with the “Scale-Out File Server for application data” option using the Failover Cluster Manager. This will set up SOFS without any shares. Use the “Add File Share” option under the SOFS role to map each of the CSVs as a file share under SOFS. The typical naming convention for an SOFS share is \\<SOFS-Role-Name>\<VolumeName>. This greatly simplifies the management of SOFS shares and their corresponding cluster shared volumes.
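These SOFS steps can likewise be scripted. A sketch, where the role name HedvigSOFS, the share name Volume1, and the security principal are placeholders for your environment:

```powershell
# Create the Scale-Out File Server role on the failover cluster
# (HedvigSOFS is a placeholder role name)
Add-ClusterScaleOutFileServerRole -Name HedvigSOFS

# Map a CSV as an SMB share scoped to the SOFS role
New-SmbShare -Name Volume1 -Path C:\ClusterStorage\Volume1 `
    -ScopeName HedvigSOFS -FullAccess "CONTOSO\Hyper-V-Admins"

# Mirror the share permissions onto the underlying NTFS folder
Set-SmbPathAcl -ShareName Volume1
```

The share is then reachable at \\HedvigSOFS\Volume1, matching the \\<SOFS-Role-Name>\<VolumeName> convention described above.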
SOFS file shares should now be accessible from all nodes in the failover cluster. You can now use these file shares for deploying Hyper-V virtual machine data and SQL Server databases over SMB.
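As a final illustration, a new virtual machine can be created directly on one of these shares with the Hyper-V PowerShell module. The VM name, memory size, and UNC path below are placeholders:

```powershell
# Create a VM whose configuration and virtual disk live on the SOFS share
# (\\HedvigSOFS\Volume1 is a placeholder UNC path)
New-VM -Name SQLVM01 -MemoryStartupBytes 4GB -Generation 2 `
    -Path \\HedvigSOFS\Volume1 `
    -NewVHDPath \\HedvigSOFS\Volume1\SQLVM01\Disk0.vhdx `
    -NewVHDSizeBytes 100GB
```

Because the VM's files live on a continuously available SMB share, the VM can be live-migrated or failed over between Hyper-V hosts without any storage migration.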
I invite you to watch a live demo of how this solution is put together end-to-end. Click on the link below to watch the video. If you are looking to deploy a virtualized computing environment using Hyper-V, while simultaneously reaping the benefits of a modern storage infrastructure, talk to us!