Predictive Analytics for Service Resilience

Pretty good predictions for really good service management

A recent UK study by the British Banking Association revealed that 19M people regularly log onto a mobile banking app, and that nearly 70% of millennials use mobile banking apps almost exclusively for their financial transactions. The direction of travel is similarly clear across most industries and public-sector organisations. The big question for these organisations is how resilient their services are in the face of continuing change and changing demand.

Tooling, instrumentation and logging have become more pervasive and more granular throughout the infrastructure and application components of these business services. Against this backdrop, the volume and quality of the data, coupled with the maturity of service teams, is bringing predictive analytic methods for service management into mainstream adoption.

This creates the opportunity to look at service resilience in a new light with better engagement across the organisation, with regulators and ultimately with users of the service, your customers.

Service management has focussed for a long time (quite understandably) on what’s happening now or what just happened and then responding through well understood processes. As user volumes grow and revenues become directly rather than indirectly tied to service delivery, service management becomes ever more critical in terms of business revenues, business reputation and regulatory compliance.

What if we could shift attention to understanding what is likely to happen and when, and identify the calm, considered actions that can be taken to avoid future service degradation or outage?

My colleague Scott Russell wrote a nice piece recently on correlation and causation, basically pointing out the importance of having people as part of your model. Collaboration between the controllers of the data (architects, service managers…), data science teams and cloud-based analytics platforms makes for an ideal combination of domain skills and enabling technology, and the basis for a new paradigm for quantifying service resilience.

The challenge with achieving this goal is that existing analytics platforms don't support this teamwork. They focus too much on the nature of the analysis and too little on the problem domain. The data scientists who understand the data and analysis get little support in framing it in the context of service management, and the service managers who understand the context find those platforms too difficult to use for building the models they want. That's a problem, and it's a missed opportunity: one that has the potential to change how service management moves from being reactive to proactive.

The barriers to this are no longer the quality, quantity or completeness of the data, but creating an environment in which the service management teams who understand the service, the architectural components and the data flows can build predictive analytical models that work day in, day out.

At Sumerian, we're trying to do just that. Service teams can bring their infrastructure and application data into a secure, cloud-hosted SaaS analytics environment and begin working with it in the full context of their operational experience.

[Diagram: service data attached to a pre-built library of analytical models]

As the diagram above highlights, service managers can attach their data to a pre-built library of analytical models that quickly baseline and surface the evidence of demand and constraints across their services and service architectures. Depending on the size and duration of the datasets, service managers can run predictive models of what's going to happen a day, a week or a month ahead, with quantifiable confidence levels in those predictions. Wouldn't it be good to have a business conversation that says: based on growth and demand actuals and forecasts, we can run our mobile banking service for six more months before users start to see degradation; if we target investment in this piece of the architecture we can extend that further; and we can validate those outcomes with data?
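To make the idea concrete, here is a minimal sketch of that kind of headroom forecast: fit a linear trend to historical utilisation and project when it will cross a degradation threshold. This is an illustrative simplification, not Sumerian's model library; the `forecast_exhaustion` function, the 80% threshold and the utilisation history are all hypothetical.

```python
from statistics import mean, stdev

def forecast_exhaustion(daily_util, threshold=0.8):
    """Fit a linear trend to utilisation samples (0-1) and estimate how
    many periods remain before the threshold is crossed.
    Returns (periods_remaining, slope, residual_spread)."""
    n = len(daily_util)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(daily_util)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_util))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    # Residual spread gives a rough feel for how much to trust the trend
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, daily_util)]
    spread = stdev(residuals) if n > 2 else 0.0
    if slope <= 0:
        return None, slope, spread  # flat or falling demand: no exhaustion projected
    periods_to_threshold = (threshold - intercept) / slope
    return periods_to_threshold - (n - 1), slope, spread

# Hypothetical history: 26 weeks of average utilisation, growing 1% per week
history = [0.40 + 0.01 * i for i in range(26)]
weeks_left, slope, spread = forecast_exhaustion(history, threshold=0.80)
```

A production model would of course account for seasonality, step changes and bursty demand, but even this sketch shows how a dataset's trend can be turned into a dated, testable statement about future headroom.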

Taking this a stage further, we can take the newly created datasets (service architecture data plus time-series analysis data) and run prescriptive analytics across a series of demand, risk, investment and cost variables, optimising for the outcomes we want.
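As a toy illustration of what "optimising for those outcomes" can mean, the sketch below enumerates combinations of candidate investments under a budget and picks the one that buys the most extra months of headroom per unit cost. The option names, costs and capacity figures are invented for the example, and real prescriptive models would weigh far more variables.

```python
from itertools import combinations

# Hypothetical investment options: (name, cost in £k, capacity headroom added)
options = [
    ("add web nodes",    40, 0.10),
    ("database upgrade", 90, 0.25),
    ("cache tier",       60, 0.20),
]

def best_plan(options, budget_k, growth_per_month):
    """Enumerate subsets of options within budget and pick the one with
    the best ratio of extra months of headroom to cost."""
    best = (0.0, (), 0)  # (score, chosen subset, total cost)
    for r in range(1, len(options) + 1):
        for subset in combinations(options, r):
            cost = sum(o[1] for o in subset)
            if cost > budget_k:
                continue
            # Headroom gained divided by monthly growth = months bought
            extra_months = sum(o[2] for o in subset) / growth_per_month
            score = extra_months / cost
            if score > best[0]:
                best = (score, subset, cost)
    return best

score, plan, cost = best_plan(options, budget_k=100, growth_per_month=0.01)
```

Exhaustive search is fine at this scale; with many options and interacting constraints you would reach for a proper solver, but the shape of the question, which spend extends service life most per pound, stays the same.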

Putting this capability into the hands of the people who run the service and understand the data, the change windows and the interdependencies would move the service risk and service resilience agenda forward considerably.

The economic cycle and the changing nature of competition mean that throwing money or people at your services isn't an option. Fortunately, the quality and volume of data created every day provide an answer to those resilience, cost and investment challenges; we just need to work with the evidence buried within those data flows, something Sumerian can help with.

November 21st, 2017