Openly delivering on the software-defined storage promise

By Mat Gruen | Cloud

As the saying goes at Hedvig, “We’re hardware agnostic, but hardware does matter.”

We work with customers to build our Hedvig Distributed Storage Platform using servers with an appropriate mix of CPUs, memory, flash, and hard drives. Of all the server architectures we see in our install base and pipeline, the ones growing fastest are those from the Open Compute Project (OCP). We recently completed a functionality test of the Hedvig Distributed Storage Platform in the Disaggregate Lab at Facebook.

Read on to learn about our test configuration, results, and thoughts on emerging OCP options for software-defined storage.


In over 15 years in the storage industry, I’ve seen many trends:

  • Centralized storage providing high availability, performance, and efficiency
  • Replication for enhanced disaster recovery
  • Efficiency 2.0 with thin provisioning, deduplication, and compression
  • Performance 2.0 with solid state drives

In this age of the software-defined datacenter, multiple software-defined storage (SDS) products are attempting to deliver on the SDS promise: agility through scalable enterprise functionality at the economics of commodity servers. Here at Hedvig we think we're fulfilling that promise. Key differentiators for Hedvig include support for all three industry storage paradigms (block, file, and object) and enterprise functionality such as deduplication and client-side caching, configurable per virtual disk (vDisk). Hedvig is truly hardware agnostic, with customers running on servers from companies including HPE, Dell, Cisco, Supermicro, and QCT.


We see a lot of interest from customers to deploy Hedvig on OCP-compliant servers from many of the vendors mentioned above. To get better insight into how our software runs on OCP servers, we tested the following configuration in the Disaggregate Lab:

3× Hedvig Storage Nodes with the following specs:

  • 2× 18-core CPUs
  • 256 GB RAM
  • 4× 1 TB NVMe (AVA card)
  • 1× mSATA boot drive
  • 1× 10 GbE
  • 10× 4 TB SAS drives (Knox JBOD)

1× Leopard server running VMware ESXi, hosting:

  • 2× Hedvig Storage Proxies (HA mode)
  • 1× Vdbench server for load generation

What we found was:

  • Our automated installation process properly installed all components of the software.
  • Performance on the OCP platform was consistent with similarly equipped traditional hardware.
  • Failure testing was handled appropriately – failed components were identified, and data was rerouted transparently to application hosts.

We did make one change to our approach for locating drives in OCP hardware. Specifically, we discovered that the popular “lshw” utility does not return the correct drive slot mappings for OCP JBODs. We were able to work around this limitation by querying the RAID controller itself for that information. I will cover this in more detail in my talk at the OCP US Summit 2017.
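To give a flavor of the workaround, here is a minimal sketch of the idea: instead of trusting lshw's slot numbering, parse a slot-to-device mapping out of the RAID controller's own report. The line format below is purely illustrative – real controller tools (storcli, sas3ircu, etc.) each have their own output layout, so the regular expression would need to match whatever your controller actually emits.

```python
import re

def parse_slot_map(controller_output: str) -> dict:
    """Build a {slot_number: block_device} map from RAID controller CLI output.

    Assumes lines of the form "Slot <N> ... /dev/sdX" purely for
    illustration; adapt the pattern to your controller's real format.
    """
    slot_map = {}
    for line in controller_output.splitlines():
        match = re.search(r"Slot\s+(\d+).*?(/dev/sd[a-z]+)", line)
        if match:
            slot_map[int(match.group(1))] = match.group(2)
    return slot_map

# Hypothetical controller output for a few JBOD slots:
sample = """\
Slot 0  4TB SAS  /dev/sda
Slot 1  4TB SAS  /dev/sdb
Slot 9  4TB SAS  /dev/sdk
"""

print(parse_slot_map(sample))  # {0: '/dev/sda', 1: '/dev/sdb', 9: '/dev/sdk'}
```

The point of going to the controller is that it knows the physical enclosure topology, so a failed drive can be mapped to the correct slot for replacement even when the OS-level tools report slots incorrectly.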

To me, the promise of SDS is agility – both in terms of customers being able to adapt a product to their changing demands and for the product itself to adapt and integrate with new hardware and software paradigms.

To learn more, come join me at the OCP Summit. I'm speaking in the Software-Defined Storage Architectures on Open Compute session on March 9th. I'll be on a panel hosted by Michael Liberte and will go into more detail about the Hedvig configuration and our OCP testing.

Or better yet, get started! If you’d like to run a modern storage solution on OCP, please reach out – we’re open for that business!

Get Started