My POV: Top software-defined storage takeaways from March 2016
We saw a big uptick in customer demand for software-defined infrastructure. It seems the seasonal lull is over and companies are already in heavy-duty 2016 planning mode. As a result, we saw some great industry news.
For those new to this series, this is our monthly roundup of what we think are the most interesting articles in software-defined storage and cloud computing as well as my point of view on why they’re significant. The past month’s highlights include a great storage infrastructure survey from 451 Group and discussions of object storage, hyperconverged architecture, and a shift in the white box server landscape.
- Datanami: 67% of Enterprises Are Spending More on Software-Defined Infrastructure in 2016
Why we liked it: This article, drawing on 451 Research data, focuses on the surge in business investment in software-defined infrastructure in the first quarter of 2016. Storage is only one part of this market, which also includes server virtualization and software-defined networking, but spending on it will increase at almost 27% of the IT teams polled. Simon Robinson’s comments about the benefits of SDI are especially important: it delivers the benefits of cloud-based IT without external cloud providers.
Hedvig’s take: This matches what we see with customers. In 2015 we saw moderate interest in platforms like Hedvig for simple cost reduction; those customers had no intention of building a true SDDC. That changed when the calendar turned to 2016. It’s still early, but we’ve seen a much more concerted effort to build private and hybrid clouds using software-defined infrastructure as the foundation.
- Fortune: Google Swears its Cloud is Ready for Prime Time
Why we liked it: Public cloud is the future, but organizations still have questions about whether, when, and how their data gets stored in public clouds. The article points out that less than 10% of IT is in public clouds even though AWS has been at it for 10 years. If Google can convince the biggest enterprises to switch to its public cloud, it will change the economics and provide honest competition for AWS and Microsoft Azure.
Hedvig’s take: We look forward to Google’s ongoing maturation in the public cloud space. The emergence of a legitimate third option will not only spur competition, but will give birth to cloud arbitrage and cloud brokering. Organizations will then make frequent, if not real-time, decisions about which cloud is best suited for an individual workload. We think this will unlock a lot of innovation and new IT architectures.
- SearchDataCenter: Big-name vendors lace gloves, box back at white box servers
Why we liked it: Businesses are finally starting to understand the value of white box servers, which make it considerably less expensive and less complicated to run a data center. However, traditional vendors are not resting on their laurels. It will be interesting to see who the eventual winners are, especially given the influence of cloud computing on the storage industry.
Hedvig’s take: It’s critical that organizations not conflate commodity with cheap, as we outlined in an earlier blog. Commodity means that the underlying server (or its components) can be procured from any provider without a change in quality or performance. For this to be true, however, we need to see a plethora of options. We’re excited to see the original server vendors competing on hardware innovation. Hardware still matters, especially in a software-defined storage architecture.
- Storage Switzerland: Is Hyperconverged worth the Hype?
Why we liked it: Hyperconvergence has been a popular topic lately. Our own Eric Carter discussed the difference between hyperconverged and hyperscale architecture in an earlier blog. We agree with George that hyperconverged is not a one-size-fits-all solution. It makes sense for certain workloads and certain operating conditions (remote offices, limited staff, etc.).
Hedvig’s take: Hyperconverged is an answer; it’s not the answer. The power of software-defined infrastructure is that each application can have its own unique underlying infrastructure, tailored specifically to that workload’s needs. Hyperconverged is valuable, but it often forces the application onto a specific architecture, which is the antithesis of the SDDC movement.
- The Register: Secondary storage, the missed opportunity for object storage
Why we liked it: All parts of the data center are being disrupted, and none stand to be modernized more than secondary storage. Object storage can still be a pivotal technology in secondary storage, but as Enrico Signoretti writes, today’s customers need a suite of products for their diverse and often changing storage needs. This article provides a useful discussion of how single-purpose object stores are losing value.
Hedvig’s take: We obviously agree with Enrico. The Hedvig Distributed Storage Platform treats object as an access interface, precisely as he describes in this article. We often see customers wanting to tackle secondary storage first, as it’s lower risk and the benefits are easier to quantify. However, they don’t want to be stuck with yet another island of storage, object in this case. Soon they look to add VMware, database, and enterprise apps. Pure object storage solutions don’t accommodate this.
What storage and cloud stories were you reading in March? Leave us a comment below, or click the tweet button to share your favorite article on Twitter.