Reading this article, I reflected on how the conversation has shifted from whether to migrate workloads to the cloud at all, to a newer question: which cloud location is the right one for your workloads?
The author, Ellen Rubin, CEO and co-founder of ClearSky Data, provides a helpful primer on why cloud projects go over budget, and how factors such as the cloud’s geographic location, latency and availability must be considered before, and after, moving to the cloud.
I agree with Rubin’s reasoning that some types of workloads benefit from placement at an optimal cloud location and would venture to take her conclusion one step further.
For example, consider a video conferencing application. Depending on where most users are dialing in from, the location a company chooses to place its cloud-based video conferencing application can make a significant difference in both the quality of the video and the cost of connecting the users to the video conference.
I would also argue that “location” adds a new dimension to cloud in that placing workloads at the best location cannot be a manual process; it needs to be automated. This is in some ways similar to the way that placement of workloads against compute capacity occurs inside the data center, but is a more challenging problem to solve given the multi-supplier distributed-cloud environment that exists. Using orchestration tools with intelligent optimization engines, workloads will be instantiated in real time to the optimal location based on the availability of distributed cloud capacity, as well as other ‘service level’ attributes such as latency.
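The placement decision described above can be sketched as a simple optimization: filter candidate locations by service-level attributes (capacity, latency), then pick the best remaining one. This is a minimal illustration in Python; the site names, attributes, and cost-based tie-break are all hypothetical, and a real orchestration engine would weigh many more factors.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # measured latency to the user population
    free_vcpus: int        # available distributed-cloud capacity
    cost_per_hour: float   # illustrative price attribute

def place_workload(sites, vcpus_needed, max_latency_ms):
    """Pick the cheapest site that meets the latency and capacity SLAs."""
    eligible = [s for s in sites
                if s.free_vcpus >= vcpus_needed
                and s.latency_ms <= max_latency_ms]
    if not eligible:
        return None  # no site satisfies the SLA; fall back to the central cloud
    return min(eligible, key=lambda s: s.cost_per_hour)

sites = [
    Site("us-east-central", 80.0, 500, 0.9),  # big central region, high latency
    Site("chicago-edge", 12.0, 16, 1.4),      # nearby edge node
    Site("dallas-edge", 18.0, 4, 1.2),        # edge node, little capacity left
]
best = place_workload(sites, vcpus_needed=8, max_latency_ms=30.0)
print(best.name)  # chicago-edge
```

Run in a real-time loop against live capacity and latency feeds, this kind of filter-then-rank step is what lets placement be automated rather than manual.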
The cloud concept of “scale-up,” which is the domain of cloud players such as Amazon, Microsoft, and IBM, needs to be married with the concept of “scale-out,” which has been the domain of telecom companies such as AT&T, NTT and Telefonica.
Are these two industries on a collision course toward collaboration?
Cloud and Edge Compute operating system capabilities will come together with Network capabilities such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) with a layer of end-to-end orchestration on top. Very high capacity and dynamic connectivity will connect the distributed cloud to the centralized cloud.
For example: just before a conference starts, an enterprise customer’s calendar system will contact the orchestration system to request a video conference. Knowing the video conference application’s SLA needs, the locations of the participants, the availability of distributed cloud capacity, and the availability of high-speed connectivity to all of those locations, the orchestrator will work out the optimum location to instantiate the video conference application and then request all of the associated cloud and network controllers to put it into service. At the end of the video conference, the reverse will occur: all of the services will be turned off, and charging records will be sent to the associated cloud and telecom settlement systems.
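The conference lifecycle above can be sketched as a single scheduling step: find edge regions with free capacity that meet the latency SLA for every participant, host in the best one, and emit both setup and teardown actions. This is a hedged sketch with invented region names and latency figures; the action strings stand in for calls to real cloud and network controllers.

```python
def schedule_conference(participant_regions, edge_capacity,
                        sla_latency_ms, region_latency):
    """Choose a host region that covers all participants within the latency SLA,
    then return the setup and teardown steps the orchestrator would request."""
    candidates = [
        region for region, free_slots in edge_capacity.items()
        if free_slots > 0
        and all(region_latency[region][p] <= sla_latency_ms
                for p in participant_regions)
    ]
    if not candidates:
        return None  # no edge node qualifies; a real system would fall back
    # Prefer the region with the lowest worst-case participant latency.
    host = min(candidates,
               key=lambda r: max(region_latency[r][p]
                                 for p in participant_regions))
    setup = ([f"instantiate video app in {host}"] +
             [f"provision connectivity {host}<->{p}"
              for p in participant_regions])
    teardown = ([f"release video app in {host}"] +
                [f"emit charging records for {host}"])
    return host, setup, teardown

# Hypothetical data: latencies (ms) from each edge region to each participant.
region_latency = {
    "nyc-edge":    {"nyc": 5,  "london": 70},
    "london-edge": {"nyc": 70, "london": 5},
    "dublin-edge": {"nyc": 60, "london": 15},
}
edge_capacity = {"nyc-edge": 2, "london-edge": 0, "dublin-edge": 3}

host, setup, teardown = schedule_conference(
    ["nyc", "london"], edge_capacity, 75, region_latency)
print(host)  # dublin-edge: best worst-case latency among regions with capacity
```

The teardown list mirrors the setup list, which is exactly the “reverse will occur” step in the scenario: services released, then charging records sent to settlement.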
This is not science fiction! It is coming soon. What is needed to make this a reality is more open standards alignment in the areas of end-to-end orchestration, SDN and Edge Compute. Cost effective deployment of Edge Compute Nodes connected to the centralized data centers via high capacity wireless/fiber connections will catalyze this distributed cloud.
Rubin concludes, “A multi-cloud model has great potential for performance and cost optimization. That doesn’t mean you can ‘set it and forget it’ though.” I take this concept a little further and say that the potential is realized with real-time automated workload location placement along with high-capacity connectivity to Edge Compute nodes.