Written by: Rob Tribe, VP, System Engineering EMEA at Nutanix
Cloud computing has evolved rapidly from its earliest stages, and along the way we have witnessed the development of many different cloud service specialisms, applications, and optimizations. However diverse and complex the cloud becomes, its core DNA can be classified into two strands: public and private.
Public cloud services, delivered from providers' data centers, offer maximum flexibility, breadth, and scope; private or on-premises cloud sits alongside the public cloud in a sort of yin-yang balance, providing control, privacy, and compliance where needed. Businesses quickly realized that a hybrid combination of the two strands was the most prudent, workable approach.
What else do we need to know about how hybrid multi-cloud happened and how this technology should be most productively implemented today?
Among the core justifications for hybrid multi-cloud is the need to locate certain workloads in specific geographic locations.
This can be due to latency requirements, if a particular application's functionality depends on responding within a precise number of microseconds, or to regulatory compliance rulings and legislation. Potentially, it can be a result of both.
CapEx to OpEx in Public Cloud
For applications that use a lot of compute resources, but only in short, highly variable bursts, an on-premises private cloud represents a disproportionate Capital Expenditure (CapEx) outlay, with the risk of purchased resources lying idle and unused.
A good example is quarterly or annual tax processing: the workload is heavy, but essentially intermittent and date-specific. Running this type of workload in the public cloud lets the cost track the consumption of resources directly, which is logically an Operational Expenditure (OpEx)-weighted use case best suited to the public cloud.
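The CapEx-versus-OpEx trade-off above can be sketched as a simple cost comparison. All figures below are hypothetical assumptions chosen purely for illustration, not real hardware or cloud pricing:

```python
# Illustrative cost comparison for an intermittent, compute-heavy workload
# such as quarterly tax processing. All numbers are invented for illustration.

def on_prem_annual_cost(capex, amortization_years, annual_opex):
    """CapEx model: hardware cost amortized over its lifetime, plus running
    costs, paid whether or not the cluster is busy."""
    return capex / amortization_years + annual_opex

def public_cloud_annual_cost(hourly_rate, hours_per_burst, bursts_per_year):
    """OpEx model: pay only for the hours the workload actually runs."""
    return hourly_rate * hours_per_burst * bursts_per_year

# Hypothetical figures: a cluster sized for the quarterly peak...
on_prem = on_prem_annual_cost(capex=120_000, amortization_years=4, annual_opex=15_000)
# ...versus renting equivalent capacity for four one-week bursts per year.
cloud = public_cloud_annual_cost(hourly_rate=20.0, hours_per_burst=168, bursts_per_year=4)

print(f"On-prem: ${on_prem:,.0f}/year")  # $45,000/year
print(f"Cloud:   ${cloud:,.0f}/year")    # $13,440/year
```

With utilization this low, the pay-per-use model wins; a workload running near-constantly would tip the same arithmetic the other way.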
Straddling the Colocation Intersection Point
Looking at the middle ground for the deployment sweet spot, we need to think about what happens if we start a new business from scratch with modest capital investment and limited physical equipment or resources.
In this scenario, it of course makes sense to use as many suitable public cloud services as possible in the first instance; they require little or no pre-procurement expenditure and offer the maximum breadth to scale if and when the business flourishes and grows.
Hybrid Multi-Cloud Reality
If a company has progressed to this point but then opens a new office in a new territory or country, it may very reasonably look to adopt a greater weight of public cloud in its new location.
If the business finds that it gets a better deal (commercially, support-related, or platform-related) in one country from one particular CSP, then it may switch some workloads to Microsoft Azure, push some workloads to Google Cloud Platform, others to AWS, and still others to any of the smaller-tier players in this market. This is the hybrid multi-cloud reality.
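The per-workload, per-provider decision described above can be sketched as a simple scoring exercise. The workload names, providers, and scores below are all invented for illustration; in practice the criteria (commercial terms, support, platform fit) would be weighted by the business:

```python
# Hypothetical multi-cloud placement sketch: route each workload to the
# provider that scores best on the criteria the text mentions. All scores
# are invented assumptions, not vendor assessments; higher is better.

SCORES = {
    "analytics":    {"Azure": 6, "Google Cloud": 9, "AWS": 7},
    "erp":          {"Azure": 9, "Google Cloud": 5, "AWS": 6},
    "web-frontend": {"Azure": 7, "Google Cloud": 6, "AWS": 9},
}

def place(workload):
    """Pick the highest-scoring provider for a given workload."""
    return max(SCORES[workload], key=SCORES[workload].get)

placement = {w: place(w) for w in SCORES}
print(placement)
# {'analytics': 'Google Cloud', 'erp': 'Azure', 'web-frontend': 'AWS'}
```

The point is that the "best" provider differs per workload, which is exactly what pushes an organization from single-cloud into hybrid multi-cloud.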
A New De Facto Standard
If we have learned anything at this point, it is perhaps just how far hybrid multi-cloud has become a kind of de facto industry standard for prudent, strategic cloud implementations. It offers the greatest scope for deployment flexibility, functional dexterity, and cost optimization.
Skillsets will need to be tuned, bolstered, augmented, and extended in line with the specific demands placed on cloud architects, Site Reliability Engineers (SREs), and the now cloud-native DevOps (developer-plus-operations) teams that will exist in this space.
Where once we had cloud, we now have cloud multiplicity, connectivity, and an occasional instance of exclusivity. It’s a small but hybrid multi-cloud world after all.