The end of custom hardware and the rise of standardized storage

By Eric Carter | Software-defined Storage

With the software-defined data center market expected to reach $77.2 billion in the next four years, traditional proprietary hardware vendors will continue to struggle to differentiate their products and to stem the tide of cloud computing converts threatening their core business.

The new technology that hardware vendors create will matter considerably less than the speed with which innovations can be rolled out to consumers. This is why custom hardware is on the wane and standardized platforms, where functionality and differentiation are driven by software, are on the rise.


As I have written before, the value of standardized, commodity infrastructure is that it lets you readily take advantage of increased capability – often at the same or lower price points than you might have paid just a few months earlier. Web-scale companies figured out this recipe quickly and shifted to this infrastructure approach years ago. Now it is making its way into the enterprise.

The challenge faced by vendors who engineer custom solutions is that they often need years of development, integration, and testing before a new solution reaches the end user. In a past life I worked for an appliance vendor caught in this cycle. The solution worked well enough; however, users who opened the case were often disappointed to find componentry a generation behind. Our customers had become experts in the latest processors, flash and solid-state drives, and the like (as well as the latest cost of goods) and couldn’t believe their “new” solution was built with gear that had been in vogue some two years earlier.

This is a significant challenge for hardware vendors. As innovation cycles shrink, much of this technology becomes obsolete even as it becomes widespread. This is where software-defined approaches to compute, storage, and networking help you stay ahead of the curve. You can adopt newer servers and storage drives at a pace you choose, and obsolete technology can be phased out without disruption. You control the timing rather than relying on lengthy vendor roll-out cycles and a costly, potentially disruptive changeover process.


For storage, as your data grows and requirements for new functionality emerge, a software-defined storage (SDS) approach allows you to extend your datacenter gradually, integrating new features and capacity at will. This includes the flexibility to morph and support different workload types – such as server virtualization, containers, databases, big data analytics, and VDI – with configurations suited to each application, something that proprietary hardware solutions typically can’t do.

If you’d like to hear more about the software-defined storage approach and the pros and cons of different architectural choices, including hyperconverged and hyperscale, I will be leading a tutorial exploring these topics at the upcoming SNIA Data Storage Innovation Conference in San Mateo, CA at 10:55 am on Monday, June 13. The full agenda can be found here. It would be great to have you there! To learn more right now about the value of a standardized storage server approach with Hedvig, just click the download button below.