ARM servers help make software-defined storage ready for the cloud era

By Rob Whiteley | Cloud

Let me guess. You clicked on this blog because you’re thinking: “Why would Hedvig, as a software-defined storage provider, care about ARM?”

My first answer: Good, the title worked!

My second answer: Because ARM servers are a critical innovation in making sure enterprise clouds actually happen.

Cloud environments — at least the real ones, like those at Amazon, Facebook, and Google — use distributed systems as their foundation. And as a leading distributed storage company, Hedvig definitely has a vested interest in that outcome.

To elaborate on that point, let’s first discuss why ARM servers are gaining popularity.


Many of us in the industry already know that servers using 64-bit ARM processors have for years been a bit of a science experiment in the data center. It’s no secret that they have historically consumed less power than their x86 rivals, which made them ubiquitous in mobile phones, tablets, and connected devices. But they haven’t really taken off in enterprise IT.

Yet based on supply-side trends and customer interest, we’ve noticed an increasing number of 64-bit ARM servers making their way to market. Consider just a few examples from the last year or so: new ARM chips from Cavium, Marvell, and Qualcomm have recently been unveiled, and Qualcomm has been among the most aggressive in pushing ARM-architecture processors as alternatives to x86. A lot of this interest is spurred by enterprise cloud envy: enterprise IT wants to mirror the success of hyperscale data centers, and ARM-based servers are an interesting piece of the power-efficiency puzzle that hyperscalers have mastered.

(One quick aside: While it may seem comical, we’re not talking about Raspberry Pi here. Sometimes when we’re talking to analysts and reporters we note that Hedvig supports 64-bit ARM-based servers, so we get the question: “Can I run this on Raspberry Pi?” The answer is no. Hedvig is data center-class software.)


Make no mistake: x86 (read: Intel) processors are very much the king of the hill in data centers across the world, and will remain so for a good long time to come (Intel’s share of the server market has long exceeded 90%). But with the explosion in the amount of data being produced, captured, and mined for business insight, the inevitability of cloud adoption across nearly every vertical, and the digitization of just about everything, the timing is right for hyperscale architectures. The question remains: can ARM servers make inroads into data centers?

Among the most often-cited reasons for deploying ARM chips, four are relevant to data center and cloud environments:

  • Price/performance. ARM offers significantly better performance per watt. For a single server, that might not matter much. Stitch together thousands of servers and it’s a whole different story. ARM favors hyperscale designs, where you need a lot of servers to run distributed applications.
  • Data center workloads are rapidly changing and evolving. Large, monolithic applications are giving way to scale-out, distributed applications; just look at the rise of Docker and containers. These workloads are powered by Linux, further decreasing the need for the x86 architecture that blossomed with the rise of Windows Server.
  • Ability to source processors from multiple vendors. Intel can and should succeed in the data center, but innovation is never a bad thing. ARM will push new thinking and new architectures and, along the way, give buyers negotiating leverage so they’re not single-sourced. Ironically, ARM will improve “Intel economics.”
  • ARM chips are suitable for infrastructure applications. ARM is a great fit for Linux-powered, software-defined infrastructure. It’s already found in custom server, storage, and networking appliances. The rise of commodity ARM servers will simply provide software-defined alternatives to these appliances.

As I alluded to above, Hedvig’s interest in ARM servers isn’t purely academic. The reason is tied to our focus on distributed systems and it also harkens back to the vision our CEO, Avinash Lakshman, developed during his time at Facebook and Amazon. Avinash co-invented Amazon Dynamo and later invented Cassandra at Facebook. In both instances, he built a new approach to storing and managing data for two of the largest and most successful web companies. He’s a bit modest, so let me brag for him. You can think of him as a pioneering hyperscaler.

At Hedvig, we’re applying Avinash’s experience creating these hyperscale systems to our mission of transforming traditional data centers into clouds. It’s in precisely this type of environment where ARM processors can provide an intriguing alternative to x86 chips.

In fact, data centers at the largest hyperscale companies such as Amazon, Facebook, and Google are no longer the energy hogs they once were. While the number of data centers has surged in recent years to enable our always-on lifestyles, the energy needed to support that growth has changed remarkably little.

Some of the biggest internet companies have in the last few years focused intently on reducing their power consumption, in part by designing their new data centers to use outside air for cooling rather than power-hungry AC systems to cool the air inside the center.

There are other ways, too, to increase the efficiency and performance of data centers. For example, what if we could run lower-power ARM-based servers in a smaller footprint using less energy, and pair that with all-flash storage, which is similarly smaller and lower-power than spinning disk? Wouldn’t these changes in and of themselves revolutionize the way data centers are built and operated?

In thinking about the rise of distributed systems (of which Hedvig is, of course, a leading proponent) and a true scale-out architecture, it makes far more sense to parallelize work across as many nodes as possible. The heart of distributed computing is maximizing aggregate horsepower by spreading workloads across many smaller nodes.
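The shard-and-aggregate pattern at the heart of that argument can be sketched in a few lines of Python. This is purely illustrative: the “nodes” here are local threads standing in for servers, and the function names are my own.

```python
# Minimal sketch of scale-out: split one big job into shards, fan the
# shards out to many small workers, then aggregate the partial results.
# In a real distributed system each worker is a separate server; local
# threads stand in for them here to illustrate the pattern.
from concurrent.futures import ThreadPoolExecutor

def process_shard(shard):
    """The work a single node performs on its slice of the data."""
    return sum(x * x for x in shard)

def distributed_sum_of_squares(data, nodes=8):
    """Shard the input, run the shards in parallel, sum the partials."""
    shards = [data[i::nodes] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(process_shard, shards))

print(distributed_sum_of_squares(range(1000)))  # → 332833500
```

The total is the same however many nodes you use; what changes is that each node only needs enough horsepower for its own shard, which is exactly where many small, efficient servers shine.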

Also, as we start to think differently about evolving application needs in today’s hyperconnected, always-on world, the solution is less about carving VMs out of very powerful servers and more about running a scale-out, microservices architecture. Technologies such as containers will help spur this evolution. As Docker and Dockerized applications continue to gain steam in the enterprise, system architects and DevOps teams will think about how to tailor software to the low-power, scale-out potential ARM can deliver.
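As a small, concrete example of what “tailoring software” can mean in practice, a deployment script might pick an architecture-appropriate container image based on what the host CPU reports. The image name and tag scheme below are hypothetical; the architecture strings are the common values Python’s `platform.machine()` returns.

```python
# Sketch: choose a container image tag matching the host CPU.
# The image name "example/app" and its tag scheme are hypothetical.
import platform

# Common platform.machine() values mapped to container arch tags.
ARCH_TAGS = {
    "x86_64": "amd64",
    "aarch64": "arm64",  # 64-bit ARM servers on Linux report "aarch64"
    "arm64": "arm64",    # macOS on ARM reports "arm64"
}

def image_tag_for_host(base="example/app", machine=None):
    """Return an image tag for the given (or auto-detected) CPU arch."""
    machine = machine or platform.machine()
    arch = ARCH_TAGS.get(machine)
    if arch is None:
        raise RuntimeError(f"no image built for architecture: {machine}")
    return f"{base}:{arch}"
```

On a 64-bit ARM server this would resolve to `example/app:arm64`; the same orchestration code runs unchanged on x86, which is the kind of architecture-agnostic tooling that lets ARM slot into existing pipelines.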


So what’s going to happen in the near-term future with ARM-based servers? Expect:

  • By the end of this year, to see the first software-defined storage product deployed on ARM servers. To date we’ve seen some interesting products that run on ARM systems-on-a-chip (SoCs) where the processor is part of the hard drive, but we haven’t seen a system run on data center-class ARM servers like the HPE ProLiant m400.
  • By 2018 to see horizontal partnerships among hardware vendors, distributed-computing software vendors and resellers offering integrated solutions based on ARM processors. Stay tuned as I’m sure there will be exciting CloudScale partners bringing ARM innovations to market!
  • By 2020 to see ARM servers creep up to the mid-single-digit share of the total server market, up from basically zilch today. This will be fueled by enterprises seeking to adopt a hyperscale mentality.

The ongoing innovation in ARM servers on the hardware side, coupled with the maturity on the distributed software side, bodes very well for the future of enterprise data centers. The lines between private and public clouds are blurring.