4 storage strategies to help you think differently about hybrid cloud

By Rob Whiteley | Cloud

To go hybrid or not to go hybrid, that used to be the question. Today it’s simply a reality. The enterprise has gone hybrid. Organizations of all sizes mix IT resources running on-premises with a growing percentage running in public clouds. You’ll be hard pressed to find organizations – even cloud native ones! – that live at either extreme.

But now there are new and interesting hybrid cloud models emerging. We’ll discuss four of these models and how software-defined storage ensures that data becomes an enabler, and not a roadblock, to hybrid cloud.

Why hybrid?

Today’s IT and developer communities often find themselves between a rock and a hard place when assessing ways to quickly bring new services to market while leveraging innovations in mobile, big data and SaaS technologies. At the same time, they must connect to “legacy” environments that run the core of the business. This dual need is often best served by the hybrid cloud—newer technologies can be increasingly supported in the cloud while connecting and extending legacy applications that reside in on-premises datacenters.

But there’s an often overlooked byproduct of a hybrid strategy: islands of storage, where data and apps are “hard coded” to particular storage options. Data is arguably the most important business asset today, and IT has to keep finding storage technologies that meet various business and app requirements. That means traditional arrays, hyperconverged appliances, and hyperscale object solutions exist in silos across on-premises and public clouds, with no solution spanning both.

Silos of storage = major hybrid cloud bottleneck

With traditional storage models, provisioning times can take days, weeks or even months in some cases, completely negating the agility and flexibility granted by hybrid cloud architectures.

This is where software-defined storage comes into play. At its best, a true, distributed software-defined storage (SDS) solution can not only unlock new cloud architectures and allow IT to think differently about cloud storage, but it can also scale with the business and consolidate islands of storage. With SDS, IT can run storage anywhere precisely because it is software, allowing ops teams to build it into hybrid cloud architecture rather than bolting it on later.

We recently covered this topic in a BrightTalk webinar where we discussed four strategies for using SDS to take advantage of the hybrid cloud. In all of the cases outlined below, a software-defined storage architecture makes these strategies work by providing a flexible, elastic, and scalable storage backend that works with a variety of cloud and on-premises solutions.

Strategy #1: Build an OpenStack “landing pad” to onboard cloud applications

Take infrastructure and applications running in public clouds (like AWS) and import them into an OpenStack environment.

While this strategy reverses the traditional cloud journey, taking apps from the public cloud and moving them into a private cloud, it has several notable benefits. Not only is the onboarding of applications easier, but acquisition time is also reduced when you move from public to private. In addition, using an OpenStack landing pad allows IT to bring applications into a private cloud environment with no noticeable changes to the end user.

What to look for in your SDS solution to support this hybrid cloud strategy:

  • OpenStack compatibility – particularly Cinder & Swift
  • S3 compatibility for native AWS apps
  • Thin provisioning for instant capacity for onboarded apps/infrastructure
  • Snaps and clones for ability to test and migrate
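The snap-and-clone requirement above can be illustrated with a minimal Python sketch of a test-and-migrate onboarding flow: snapshot a volume imported from the public cloud, take a thin clone of it, and validate the clone before cutting the application over. The `Volume` class and method names are purely illustrative, not a real SDS API.

```python
# Hypothetical model of snapshot/clone-based onboarding. A snapshot is a
# point-in-time copy of a volume's block map; a thin clone shares those
# blocks until they are overwritten, so it consumes no extra capacity.

class Volume:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = dict(blocks)  # block id -> data

    def snapshot(self):
        # Read-only, point-in-time copy of the block map.
        return dict(self.blocks)

    def clone(self, snap, name):
        # Thin clone: starts from the snapshot's blocks.
        return Volume(name, snap)


def onboard(imported_volume, validate):
    """Snapshot and clone an imported volume, then validate the clone.

    Returns the clone if validation passes (safe to migrate), else None.
    """
    snap = imported_volume.snapshot()
    test_copy = imported_volume.clone(snap, imported_volume.name + "-test")
    return test_copy if validate(test_copy) else None
```

Because the clone is thin, the validation run costs almost nothing in capacity, which is exactly why snaps and clones matter for a landing pad.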
Strategy #2: Create hybrid AWS-based apps that migrate older, colder data from private to public clouds (and back)

Optimize your cloud economics and free up data center capacity by creating hybrid, AWS-based applications that can migrate data in and out of the public cloud. In this strategy, applications can run in EC2 while reads and writes happen in on-premises data centers, shuttling hot data out of costly cloud storage and into an on-premises data center.

Over time, the on-premises datacenter can be programmed to push colder data to less expensive cloud storage like S3 or even Glacier, while the hotter data remains easily accessible in the datacenter, reducing latency. This hybrid strategy reduces costs, improving the bottom line, and ultimately provides a more predictable cost structure.

What to look for in your SDS solution to support this hybrid cloud strategy:

  • S3 compatibility for native AWS apps
  • Auto-tiering and balancing to migrate data
  • Ability to run instances in AWS / EC2 if performance is required
  • Multi-site replication to recreate availability zones
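The auto-tiering behavior described above can be sketched as a simple age-based placement policy in Python. The 30- and 180-day thresholds and tier names are example values for illustration, not product defaults.

```python
from datetime import datetime, timedelta, timezone

# Illustrative auto-tiering policy: data untouched for 30 days moves to
# S3-class object storage, and after 180 days to a Glacier-class archive
# tier. Hot data stays on-premises for low-latency access.
TIERS = [
    (timedelta(days=180), "glacier"),     # coldest: archive storage
    (timedelta(days=30), "s3"),           # cold: standard object storage
    (timedelta(0), "on-premises"),        # hot: keep local
]


def place(last_access, now=None):
    """Return the storage tier for data given its last-access time."""
    now = now or datetime.now(timezone.utc)
    age = now - last_access
    for threshold, tier in TIERS:
        if age >= threshold:
            return tier
    return "on-premises"
```

A real SDS tiering engine would also rebalance data back up the hierarchy when it turns hot again, which is the "and back" half of this strategy.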
Strategy #3: Treat public clouds as DR sites where apps are automatically protected across data centers and clouds

Run your applications on-premises and send copies of your data to the cloud, treating it as a data repository. Treating public clouds as DR sites can help you avoid additional datacenter build-outs and provides multi-zone availability to your business.

The value here is to avoid data loss if a datacenter goes down. Thanks to the built-in replication capabilities in SDS, your applications won’t see a blip even if an entire data center is offline. Think of it as getting N+1 protection where the cloud is that +1 that ensures you have a safe copy even if something affects all of your data centers. Using this hybrid cloud strategy actually allows for better DR than a standard synchronous replication environment, because DR intelligence is built into the software, removing complexity and extra cost.

What to look for in your SDS solution to support this hybrid cloud strategy:

  • Ability to run SDS instances on public compute clouds
  • Multi-datacenter support for strong-consistency writes to multiple DCs/clouds
  • Tunable replication factor of 3 or more
  • Per-app or per-VM storage policies
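The N+1 protection described above reduces to a simple placement invariant: every piece of data must have replicas in at least two distinct sites, so losing any one data center or cloud still leaves a safe copy. A minimal Python sketch of that check (site names and the `placement` structure are hypothetical):

```python
def survives_site_loss(placement):
    """Check the N+1 invariant for a replica placement map.

    placement maps data id -> list of sites holding a replica.
    Returns True only if every piece of data keeps at least one replica
    when any single site (data center or cloud) goes offline.
    """
    for sites in placement.values():
        # All replicas in one site means one failure loses the data.
        if len(set(sites)) < 2:
            return False
    return True
```

With a replication factor of 3 spread across two data centers and a public cloud, this invariant holds, which is why tunable, multi-datacenter replication appears in the checklist above.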
Strategy #4: Store your data on private, hosted infrastructure and cloudburst to the public cloud for compute

Cloudbursting is a great way to take advantage of public cloud while still keeping most of your data on-premises. Often, public cloud sites are co-located with private hosted infrastructure. By building a cloud in the same location as a large public cloud provider, users can take advantage of the economics of public cloud for stateless compute.

The physical proximity ensures low-latency transit between clouds, allowing users to keep data safe and compliant in private clouds while still reaping the benefits of public cloud compute.

What to look for in your SDS solution to support this hybrid cloud strategy:

  • Support for multiple public clouds, including AWS, Microsoft Azure, and Google Cloud Platform
  • Ability for SDS to run in “hyperscale” where storage is decoupled from compute
  • Functionality to provide client- or server-side caching for local I/O in the EC2 instances
  • Multi-hypervisor, container, and OS support for apps running in EC2
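The cloudbursting idea above can be sketched as a toy scheduler: stateless jobs overflow to the co-located public cloud once the private cluster is full, while stateful jobs stay with the data on-premises. The capacity figure, job tuples, and site names are illustrative assumptions, not a real scheduler API.

```python
PRIVATE_CAPACITY = 100  # schedulable cores in the private cluster (example value)


def schedule(jobs):
    """Place jobs, bursting stateless work to the public cloud when full.

    Each job is a (name, cores, stateless) tuple. Stateful jobs are
    queued rather than burst, so data never leaves the private cloud.
    """
    used = 0
    placement = {}
    for name, cores, stateless in jobs:
        if used + cores <= PRIVATE_CAPACITY:
            used += cores
            placement[name] = "private"
        elif stateless:
            placement[name] = "public-cloud"  # burst compute only
        else:
            placement[name] = "queued"        # wait for local capacity
    return placement
```

Because only stateless compute bursts, the low-latency link between the co-located sites carries requests, not bulk data, which is what keeps the data compliant in the private cloud.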
Use software-defined storage to power your modern cloud

Cloud storage does not just mean “putting data in the public cloud” anymore. With software-defined storage, a number of hybrid options, including the four outlined above, are now achievable. In many ways, software-defined storage is powering the modern cloud.

However, buyers who are interested in exploring SDS must choose carefully. Not all SDS solutions support hybrid cloud models. Before you buy, make sure that your SDS solution supports newer capabilities like OS diversity, granular storage policies and the ability to run across private and public cloud infrastructures. Otherwise, you might find yourself dealing with the very same issues that you experienced with traditional storage.

If you’d like to go even more in-depth on this topic, click below to watch our complete BrightTalk webinar. Or, if you’d like more information about how Hedvig can help with your hybrid cloud storage needs, just click here to get started.

Watch On-Demand