Why software-defined storage makes Bimodal IT an acceptable reality
The topic of Bimodal IT sparked a small kerfuffle when Gartner first started talking about it in 2014. At the time, industry analysts, journalists and others downplayed the idea, calling it nonsense, a Band-Aid approach to bolting on innovation, or worse. Many noted that “Bimodal IT” was just a fancy-sounding term for how IT learns to incorporate disruptive technologies.
A certain amount of healthy skepticism is usually a good thing, and we at Hedvig embrace critical thinking. Yet call it what you want — Bimodal IT, brownfield versus greenfield, legacy versus disruptive, or whatever — it’s a very real organizational and architectural structure we see customers using. They don’t necessarily call it Bimodal IT; however, we’ve seen the concept gaining popularity among our customers for some time now (see our earlier blog touching on the topic here).
The questions you need to ask: Why now? Is Bimodal IT just retreading an old, tired IT concept? And what technologies can help? Let’s discuss each.
What is Bimodal IT?
For those who aren’t familiar with the term Bimodal IT, think of Mode 1 as traditional IT and Mode 2 as agile IT. The figure below recaps the basic concepts, and you can find Gartner’s formal definition here.
A customer example of Bimodal IT
We don’t think Bimodal IT is just a retread. New technology and business pressures, especially digital transformation and the move en masse to the cloud (be it private, public or hybrid in nature), are driving enterprises to run their IT organizations at two different speeds.
Take, for example, a very large financial services company working with Hedvig. They concluded that in order to build the cloud-like infrastructure they needed to lower costs and increase business responsiveness, they would have to embrace Bimodal IT in their data centers. The mandate from their CTO: build a private cloud based on a truly software-defined data center (SDDC). That meant the entire architecture had to be procured as software (divorced from its underlying hardware), programmable via standard APIs, and elastic in its scaling properties (able to scale both out and in). They chose the Hedvig Distributed Storage Platform as their underlying storage infrastructure for these very reasons.
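To make the “programmable via standard APIs” requirement concrete, here is a minimal sketch in Python of what provisioning storage through a REST-style API might look like. The endpoint path and field names are hypothetical illustrations for this post, not Hedvig’s actual API.

```python
import json

def make_volume_request(name, size_gb, replication_factor=3):
    """Build the payload for a hypothetical 'create volume' REST call.

    In practice an orchestration tool would send this with an HTTP client;
    the point is that provisioning is a programmable request, not a ticket.
    """
    return {
        "method": "POST",
        "path": "/api/v1/volumes",  # hypothetical endpoint
        "body": json.dumps({
            "name": name,
            "sizeGb": size_gb,
            "replicationFactor": replication_factor,
        }),
    }

req = make_volume_request("analytics-vol", 512)
print(req["path"])
```

Because the request is just data, the same call can be issued by a cloud management platform, a DevOps pipeline, or a developer’s script, which is exactly the property the CTO’s mandate demands.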
The organization built this as an entirely parallel datacenter infrastructure, recognizing that it’s otherwise too hard to change the wheels on the bus while the bus is going down the freeway at 65 miles per hour. A more traditional approach wouldn’t meet the business’s timeline, and missing that timeline would just generate more “shadow IT” as the business invested in workarounds.
They started with a small, four-datacenter deployment run by a dedicated tiger team. The goal was to prove the value of the private cloud and test it with some emerging business applications. Some of their Mode 2 applications were immediately ported to the new architecture (Hadoop is a good example). Based on the success of this “pilot,” the organization is now rolling out its global private cloud based on this blueprint. Most applications will be migrated over time, but they foresee that some will remain on the older infrastructure, running in racks side by side with the new. This customer will pursue the Bimodal IT approach until everything is integrated and, most importantly, IT staff are trained in the skills needed to maintain the new Mode 2 infrastructure.
When all is said and done, is this unique? Probably not. Is Bimodal IT the best way to describe their approach? Yes: they built a parallel SDDC infrastructure.
New technologies make Bimodal IT an operational reality
The controversy over Bimodal IT isn’t whether it exists. All large enterprises run at “two speeds,” with older, legacy technology sitting beside newer, more innovative technology. The controversy is whether this approach risks bolting innovation onto IT rather than integrating innovation into IT. Some organizations may decide to run two parallel infrastructures in perpetuity (just ask any organization still using a mainframe). But new technologies are surfacing that let IT cap its investment in Mode 1 legacy IT and divert energy and money into Mode 2.
Four such technologies are:
- Containers. Containers let a developer write code, build an application into a container, test it all from his or her laptop, and then easily ship that same container into production. This promise of portability is leading traditional infrastructure vendors to embrace containers. Startups like ContainerX are now joined by incumbents like VMware and Microsoft, which eagerly support containers “natively” with vSphere Integrated Containers and Windows Server Containers, respectively. This promises a better operational experience in which companies can develop Mode 2 applications natively while porting Mode 1 applications to the same infrastructure.
- Cloud management platforms. Many skeptics define cloud as nothing more than self-service and automation tools layered atop virtualization. Perhaps, but those elements are where the critical “cloud-like” capabilities reside. Cloud management platforms (CMPs) provide the agile services orientation Mode 2 requires, coupled with the ability to manage Mode 1 infrastructure (either natively or through orchestration APIs). BMC, HPE, Microsoft, RightScale, Red Hat, Scalr, and VMware (and I would argue OpenStack, even though Gartner disagrees) are all good examples.
- DevOps tools. Orchestration tools like Kubernetes, Swarm, Fleet, and Mesos help companies scale services up and down. Configuration management tools like Ansible, Chef, and Puppet help deploy infrastructure throughout datacenter lifecycle stages. These tools have made Mode 2 environments significantly easier for organizations to handle in-house, rendering Mode 2 far less “bolted on.” IT does not need to dedicate specific individuals to manage the separate environments given the time savings these tools afford. In fact, often your developers can just do it. After all, that’s the point, right?
- Software-defined infrastructure. The last, and arguably (we admit our bias) most important, piece is the infrastructure itself. Software-defined infrastructure provides many benefits, not the least of which is that it’s “code.” As software, it can be manipulated, programmed, and reused. Making your infrastructure software-defined gives it the agility to operate in Mode 2, further enhanced by many of the technologies listed above. Yet the same platform can also be deployed as “traditional” infrastructure to accommodate those pesky Mode 1 applications, simply by changing the relevant policies and underlying commodity hardware specifications. The ability to do this in a single, software-defined platform decreases operational expenses and eases migration.
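The “infrastructure is code” idea in the last bullet can be sketched in a few lines. The policy fields below are hypothetical, invented for illustration, but they show the key property: the desired state is just data that can be versioned, diffed, and reused, so reconfiguring a platform for Mode 1 or Mode 2 becomes a policy change rather than a hardware change.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoragePolicy:
    """Hypothetical policy knobs for a software-defined storage platform."""
    replication_factor: int
    deduplication: bool
    client_cache: bool

# The same platform, configured two ways by policy alone (illustrative values).
MODE1_LEGACY = StoragePolicy(replication_factor=3, deduplication=False, client_cache=False)
MODE2_AGILE = StoragePolicy(replication_factor=2, deduplication=True, client_cache=True)

def policy_diff(current: StoragePolicy, desired: StoragePolicy) -> dict:
    """Return only the fields an orchestrator would need to change."""
    return {
        field: getattr(desired, field)
        for field in current.__dataclass_fields__
        if getattr(current, field) != getattr(desired, field)
    }

# Moving an application from Mode 1 to Mode 2 is a diff, not a forklift upgrade.
print(policy_diff(MODE1_LEGACY, MODE2_AGILE))
```

This is exactly what the DevOps tools above exploit: because the desired state is data, tools like Ansible or Puppet can compute and apply it without a human reconfiguring arrays by hand.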
The role of storage and Hedvig in Bimodal IT
Stored data is usually the second most challenging element (behind changing IT behavior) to migrate from Mode 1 to Mode 2. A large enterprise will likely have petabytes of data attached to and feeding its Mode 1 applications. Thankfully, Hedvig is a platform that, via policy and commodity server selection, can be configured for either Mode 1 or Mode 2. It lets the application dictate its storage resources while providing a single operational experience for managing the underlying storage. Our goal is to give IT the power to migrate applications and data as and when it makes sense.
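As a rough sketch of what “let the application dictate its storage resources” could look like in practice, here is a hypothetical mapping from an application’s mode to a storage policy. The field names and protocol choices are illustrative assumptions for this post, not Hedvig’s actual schema.

```python
def policy_for(app):
    """Pick storage characteristics based on the application's mode (illustrative)."""
    if app["mode"] == 1:
        # Legacy app: favor durability and the block semantics it expects.
        return {"protocol": "iscsi", "replication": 3, "dedup": False}
    # Agile app: cloud-native access and capacity efficiency.
    return {"protocol": "s3", "replication": 2, "dedup": True}

legacy_erp = {"name": "erp", "mode": 1}
hadoop = {"name": "hadoop", "mode": 2}
print(policy_for(legacy_erp)["protocol"])  # iscsi
print(policy_for(hadoop)["protocol"])      # s3
```

Both applications land on the same underlying platform; only the per-application policy differs, which is what makes the eventual Mode 1 to Mode 2 migration a configuration exercise rather than a data-center project.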
If the ongoing debate over Bimodal IT tells us anything, it’s that IT teams are finally thinking more like cloud providers in how they use technology to accelerate their businesses. We’ve seen the emergence of containers, cloud management platforms, DevOps tools, and software-defined infrastructure that enable IT to bridge Mode 1 and Mode 2. Bimodal IT is a reality. The question is: are you investing in the tools to make it work?
To learn more about how Hedvig integrates with technologies that ease Bimodal IT transitions, click below to watch a four-minute video. We demonstrate Hedvig software-defined storage, Docker Datacenter (containers, cloud management, and DevOps tools), and MongoDB.