The future of work in the heterogeneous cloud

Today, we have seen the proliferation and popularization of cloud computing, the ubiquity of mobile devices and new algorithmically enriched approaches to Artificial Intelligence (AI) and Machine Learning (ML), all of which have further changed the nature of work.

Encoding workflows into data

The net consequence of much of the development on the post-millennial technology curve is a new approach to digitally driven work. To unpack this sweeping generalization, digital work means tasks, processes, procedures and higher-level workflows that can be encoded into data so that their status can be tracked, analyzed and managed.
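One way to picture that encoding is a workflow whose tasks carry machine-readable state, so that progress can be computed rather than reported. This is a minimal sketch, not any particular product's data model; the `Task`, `Workflow` and `Status` names are illustrative only:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    DONE = "done"

@dataclass
class Task:
    name: str
    status: Status = Status.PENDING

@dataclass
class Workflow:
    name: str
    tasks: list = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of tasks completed -- trackable because state is data."""
        if not self.tasks:
            return 0.0
        done = sum(1 for t in self.tasks if t.status is Status.DONE)
        return done / len(self.tasks)

onboarding = Workflow("employee-onboarding", [
    Task("create accounts", Status.DONE),
    Task("assign hardware", Status.IN_PROGRESS),
    Task("security training"),
])
print(onboarding.progress())  # 1 of 3 tasks done
```

Because the workflow is data rather than tribal knowledge, the same structure can be queried, analyzed and handed to software agents as readily as to people.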

As part of the total estate of big data that now hovers over all digital assets in the modern workplace, digital workflows can now be built that are shared more intelligently between humans and machines.

Where processes are accurately definable, typically repeatable and easily replicable, we now have the opportunity to use autonomous software controls such as Robotic Process Automation (RPA) and chatbots to shoulder part of our daily tasks. Although there is a period of process mining and process discovery that we need to perform before we can switch on this autonomous advantage, once we do, we can start to focus human skills on more creative, higher-value tasks.
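The dividing line between automatable and human work can be made concrete with a small triage function in the RPA spirit. The invoice rules below are invented for the example, not drawn from any real RPA product:

```python
def route_invoice(invoice: dict) -> str:
    """Rule-based triage: accurately definable, repeatable cases are
    handled automatically; anything outside the encoded rules is
    escalated to a human."""
    if invoice.get("po_match") and invoice.get("amount", 0) < 10_000:
        return "auto-approve"
    return "human-review"

# A matched, low-value invoice needs no human touch...
print(route_invoice({"po_match": True, "amount": 480}))   # auto-approve
# ...while an unmatched one keeps a person in the loop.
print(route_invoice({"po_match": False, "amount": 480}))  # human-review
```

Process mining is essentially the work of discovering which cases can safely live above that first `return` and which must fall through to people.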

All of this brings us to a point where we can be intelligently granular about how we place elements of our total digital workload across data services, application resources, cloud backbones and, ultimately, people.

A broader cloud backbone

To enable digital work, we still have some challenges to overcome, i.e. we need to be able to communicate with each other, as humans and machines, in a consistent yet essentially decoupled way. Because not every work task has had its genome decoded, we are still searching for ways to encapsulate certain aspects of enterprise workflows.

This is tough because we’re aiming at a moving target, i.e. market swings and the dynamism of global trade. But as we start to build new work systems, we can begin to operate workflows that are intelligently shared across different interconnected cloud services, for a variety of core reasons.

Enterprises can now create a layered fabric of work elements and functions shared across different Cloud Services Providers (CSPs), sometimes separated out on the basis of different cloud contract costs, sometimes for reasons related to geographic latency or regulatory compliance, and often dispersed across more than one cloud because of the different optimization functions (processing, storage, transactional input/output capability, GPU acceleration and so on) that exist in different services.
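A toy placement function can make that trade-off concrete: filter providers by compliance and latency constraints, then choose on cost. The provider records and thresholds below are invented purely for illustration:

```python
def place_workload(workload: dict, providers: list) -> dict:
    """Pick the cheapest provider that satisfies the workload's
    regulatory (region) and latency constraints."""
    eligible = [
        p for p in providers
        if p["region"] in workload["allowed_regions"]
        and p["latency_ms"] <= workload["max_latency_ms"]
    ]
    if not eligible:
        raise ValueError("no provider satisfies the constraints")
    return min(eligible, key=lambda p: p["cost_per_hour"])

providers = [
    {"name": "csp-a", "region": "eu", "latency_ms": 20, "cost_per_hour": 0.50},
    {"name": "csp-b", "region": "eu", "latency_ms": 40, "cost_per_hour": 0.30},
    {"name": "csp-c", "region": "us", "latency_ms": 10, "cost_per_hour": 0.10},
]
workload = {"allowed_regions": {"eu"}, "max_latency_ms": 50}
# csp-c is cheapest but non-compliant; csp-b wins on cost among eligible ones
print(place_workload(workload, providers)["name"])  # csp-b
```

Real placement engines weigh far more dimensions, but the shape of the decision, constraints first and cost second, is the same.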

If private on-premises cloud combined with public cloud is what we now understand to be the de facto ‘most sensible’ approach known as hybrid cloud, then the deployment scenario above is one move wider. Where workloads are placed across clouds, we are in hybrid territory; but where individual data workflows are dispersed across and between different cloud services, we get to poly-cloud.

Visible benefits, invisible infrastructure

The architectural complexity of interconnected cloud services that are established around these terms is not hard to grasp. In order to make this type of lower substrate diversity manageable, cost-effective and above all functional, enterprises will need to embrace a platform-based approach to hyperconverged cloud infrastructure.

Most organizations struggle to manage heterogeneous cloud environments effectively and to move workloads back and forth between them. Establishing visible benefits from this approach to cloud is only possible if the business is able to think of its cloud infrastructure as an invisible foundational layer.

Managing a multi-cloud and poly-cloud infrastructure means being able to simplify cloud management and operations requirements across an enterprise’s chosen estate of interconnected cloud services. With different providers all offering different software toolsets, different management dashboards, different configuration parameters and so on, there is no point-and-click solution without a hyperconverged higher platform layer in place.
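What that higher platform layer does, in essence, is present one interface over many provider-specific toolsets. The sketch below assumes placeholder provider classes; real implementations would wrap each CSP's actual SDK:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """One uniform surface; each concrete class wraps a CSP's own tooling,
    dashboards and configuration parameters."""
    @abstractmethod
    def deploy(self, workload: str) -> str: ...
    @abstractmethod
    def status(self, workload: str) -> str: ...

class ProviderA(CloudProvider):
    def deploy(self, workload: str) -> str:
        return f"provider-a://{workload}"   # would call CSP A's SDK here
    def status(self, workload: str) -> str:
        return "running"

class ProviderB(CloudProvider):
    def deploy(self, workload: str) -> str:
        return f"provider-b://{workload}"   # would call CSP B's SDK here
    def status(self, workload: str) -> str:
        return "running"

def deploy_across(providers: list, workload: str) -> list:
    """Operations code targets the abstraction, never one CSP's dashboard."""
    return [p.deploy(workload) for p in providers]

print(deploy_across([ProviderA(), ProviderB()], "billing-api"))
```

The point-and-click experience the text describes only becomes possible once every provider difference is hidden behind this kind of shared contract.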

Bandwidth requirement diversity

As theoretical as some of the discussion here sounds, many practical examples already exist. South Africa’s largest bank, Nedbank, has been bold with a cloud-based approach designed to deliver cost-effectively on its diverse bandwidth requirements.

Needing low-latency remote-worker provision for its 2,000-strong developer function in India (while tolerating less demanding latency parameters for other functions), the company had to build systems capable of superior service that would be a win-win for staff and customers alike.

