5 ways storage is changing for virtualization
What’s the latest in virtualization? Well, turns out it’s storage. 2015 has been a huge period of churn in the industry. Traditional storage is losing ground to new ways of deploying, provisioning, and managing storage. But why?
It’s really all about efficiency. You’re looking for faster and cheaper ways to store the vast amounts of data you need to keep – from all of your apps, the Internet of Things, mobile devices, and so on. You want storage to be easier and more cost-effective, and you’re adopting solutions that eliminate the barriers and limitations of existing approaches. BTW – this is exactly why Hedvig and so many other startups in this space now exist.
So what’s going on with storage in the virtual server space? For one thing, there is a move to achieve a “true” cloud-like infrastructure – not just highly virtualized, but much more automated. You want on-demand provisioning – even for complex IT tasks. It’s why there’s been so much interest and appetite in 2015 for cloud orchestration tools like OpenStack, Mesosphere, and VMware’s vRealize Orchestrator.
The way we have done IT for the last decade will not be how we do IT going forward. It is now much more about the fast beating the slow – versus the big beating the small. If you can truly deliver an agile, cloud-like infrastructure, it will make a big difference.
5 storage changes that give you a leg up
- Software-defined storage (SDS): More and more companies are moving away from traditional monolithic arrays to a software-based approach where the capabilities and value of enterprise storage are purchased and delivered via software as opposed to locked up in a custom hardware solution. Flexibility is paramount. Today’s software-defined solutions give you the ability to respond more effectively to business demands. They’re fully programmable, and provisioning is much more cloud-like, taking minutes versus hours, days, or weeks. SDS also provides the ability to define virtual volumes with features and functions – like VMware vVols – assigning capabilities that uniquely fit a given application or service.
- Commodity hardware: Taking a page out of the playbook of the Googles, Amazons, and Facebooks, many of you are choosing to deploy off-the-shelf systems versus custom hardware. Today it delivers lower cost without sacrificing quality. Now, with SDS in your virtualization environment, you can ride Moore’s law and take advantage of the frequent increases in processing power, drive capacity, etc. while maintaining or even dropping your costs over time.
- Distributed systems: Storage utilizing software and commodity hardware also benefits from the cloud-company playbook to achieve a new level of resilience and performance. It takes advantage of the power of many systems working together to deliver horsepower and capacity. Data is replicated across many systems such that if a drive or a node fails, the system continues working. It self-heals, replicating and repairing data on alternate nodes in the cluster. Because data is spread across nodes – and even across sites – moving VMs is also much easier and more effective. Data is served to VMs from the fastest available source, so as VMs move, you can maintain locality and performance.
- Flash & hybrid SSD/HDD: Flash has taken off in the storage market as it affords greater performance with greater cost efficiency. A little flash goes a long way. With SSDs you can deliver consistent IOPS at a lower price point – up to 100x the performance of a standard hard drive. Thanks to flash, you can dial down the number of spindles and the speed of HDDs needed in the system. Flash is being used in a number of ways, including as a place of residence (think all-flash array) and as a caching tier at both the storage layer and the application host. Hot data can be maintained and delivered automatically from flash, giving you blazing fast performance without ever having to hop across a LAN or WAN.
- Hybrid cloud: In 2015 we’ve started to see more private datacenters working with public compute and storage resources. A lot of what we see is about going hybrid for capacity and/or for a sandbox development environment. This means you may use the cloud as temporary capacity – bursting into the cloud when you need it for that extra workload at certain times of the year. Or, you may develop and test your VM-based application in the public cloud and migrate it to production in your private cloud. You may even begin your entire IT operation first in the public cloud and then move to private. The storage capabilities outlined here – software-defined, distributed, etc. – are making it possible for public and private data to work together much more seamlessly than in the past.
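The self-healing replication described in the list above can be sketched with a toy model. Everything here is illustrative – the node names, the `replication_factor` parameter, and the placement policy are all hypothetical, and a real distributed storage system uses far more sophisticated placement and repair logic – but it shows the core idea: copies live on multiple nodes, and a node failure triggers re-replication rather than downtime.

```python
import random

class Cluster:
    """Toy model of a distributed storage cluster (hypothetical, for illustration).

    Each chunk of data is replicated across `replication_factor` distinct nodes;
    when a node fails, surviving nodes re-replicate its chunks to restore the
    replication factor (the "self-healing" behavior described above).
    """

    def __init__(self, nodes, replication_factor=3):
        self.nodes = set(nodes)
        self.rf = replication_factor
        self.replicas = {}  # chunk_id -> set of node names holding a copy

    def write(self, chunk_id):
        # Place copies on `rf` distinct nodes chosen from the cluster.
        self.replicas[chunk_id] = set(random.sample(sorted(self.nodes), self.rf))

    def fail_node(self, node):
        # The node disappears; self-heal by re-replicating affected chunks
        # onto surviving nodes that don't already hold a copy.
        self.nodes.discard(node)
        for holders in self.replicas.values():
            holders.discard(node)
            candidates = sorted(self.nodes - holders)
            while len(holders) < self.rf and candidates:
                holders.add(candidates.pop())

    def is_readable(self, chunk_id):
        # Data stays available as long as at least one replica survives.
        return len(self.replicas[chunk_id]) > 0


cluster = Cluster(["n1", "n2", "n3", "n4", "n5"], replication_factor=3)
cluster.write("vm-disk-chunk-0")
cluster.fail_node("n1")
```

After the failure, the chunk is still readable and the cluster has already rebuilt the third copy on another node – which is also why node add/remove can be routine maintenance rather than a migration event.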
Get it for less
What we see working with customers is that a software-defined, commodity-based approach leveraging all of the things above can yield significant savings – as high as 60% or more. How?
- Purchase cost: Today’s storage software + commodity servers frankly costs less than traditional arrays.
- Scaling: You can scale as needed in smaller increments and use components like high-capacity drives to yield a lower-cost-per-GB.
- Management: Provisioning and managing storage can be easier and faster. The manpower needed to manage a well-built distributed system is less than with a traditional approach.
- Migrations: Say goodbye to time consuming data and array migrations. Imagine the ability to remove and add nodes over time for an evergreen infrastructure with data being auto-distributed and auto-balanced.
- Uptime: With a software-based distributed system, the system self-heals and delivers data from surviving nodes, yielding less downtime.
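To make the purchase-cost and scaling points above concrete, here is a back-of-the-envelope comparison of cost per usable GB. All of the prices and capacities are invented assumptions for illustration – not quotes, and not Hedvig pricing – and your actual savings will depend entirely on your own numbers; the point is the mechanics of comparing usable (not raw) capacity.

```python
# Hypothetical back-of-the-envelope comparison: cost per usable GB.
# All dollar figures and capacities below are illustrative assumptions.

def cost_per_usable_gb(total_cost, raw_gb, overhead_factor):
    """overhead_factor = raw GB consumed per usable GB
    (e.g. 3.0 for 3-way replication, ~1.4 for typical RAID-6)."""
    return total_cost / (raw_gb / overhead_factor)

# Assumed traditional dual-controller array: $150,000 for 100 TB raw, RAID-6.
array = cost_per_usable_gb(150_000, 100_000, 1.4)

# Assumed SDS on commodity servers: $60,000 for 120 TB raw, 3x replication.
sds = cost_per_usable_gb(60_000, 120_000, 3.0)

savings = 1 - sds / array  # fraction saved on acquisition cost alone
print(f"array: ${array:.2f}/GB, sds: ${sds:.2f}/GB, savings: {savings:.0%}")
```

Note that replication consumes more raw capacity than RAID, yet the commodity economics can still come out ahead on acquisition cost – and the management, migration, and uptime items above are savings on top of that.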
If any of this sounds good to you – I invite you to investigate Hedvig further. Hedvig is software-defined. It is distributed. It takes advantage of all of the above storage capabilities to deliver a more effective storage solution for virtualization no matter the hypervisor(s) you use (VMware vSphere, Microsoft Hyper-V, KVM, Citrix XenServer, etc.).