
We Analysed 30,000 Hours of 'Waste.' Here's What We Got Wrong. (Spoiler: It was Failure Demand)

  • Writer: Sam Lawford
  • Nov 27
  • 3 min read
"Insurance claims operational excellence: identifying and eliminating failure demand through systems thinking and root cause analysis"

When I started analysing touch time data for a major UK insurer's claims operation, I thought I knew what I'd find.


Inefficient processes. Slow systems. Undertrained staff. The usual suspects.


However, I discovered we'd been looking at the problem entirely wrong.


And that misunderstanding was costing the business 30,000 hours annually.


The Original Hypothesis (That Turned Out to Be Wrong)


Like most operational excellence projects, we started with a simple question:


"How can we make claims processing more efficient?"


The conventional approach would be:


  • Time-motion studies

  • Process optimisation

  • Technology upgrades

  • Automation

  • Performance management


We mapped every step. We timed every task. And then we noticed something odd.


Handlers were spending 30% of their time on work that added zero value to customers.


The Breakthrough: It Wasn't Waste. It Was Failure Demand.


Here's what we discovered:


30% of our claims handlers' time was spent on work that only existed because something earlier in the process had failed.


Let me give you a real example:


Case Study: The Phantom Touch


We tracked a simple claim, one that should have been straightforward. It required 7 touches before settlement.


Touch 1: Initial notification (value-add)

Touch 2: Missing images - handler calls customer back

Touch 3: Unclear liability - handler reviews case again

Touch 4: Customer calls asking for update

Touch 5: Vehicle valued (value-add)

Touch 6: Customer calls again (to chase payment)

Touch 7: Final settlement (value-add)


Out of 7 touches, only 3 added value.


The impact? 30,000 hours of annual waste. This wasn't about working faster. It was about eliminating work that shouldn't exist.
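The case study above boils down to a simple classification exercise. Here's a minimal sketch in Python, using the seven touches from the claim we tracked (the value/failure labels follow the breakdown above):

```python
# Classify each touch as value demand (work the customer actually wants)
# or failure demand (work created only because something upstream failed).
touches = [
    ("Initial notification", "value"),
    ("Missing images - call customer back", "failure"),
    ("Unclear liability - review case again", "failure"),
    ("Customer calls asking for update", "failure"),
    ("Vehicle valued", "value"),
    ("Customer calls to chase payment", "failure"),
    ("Final settlement", "value"),
]

value_touches = [name for name, kind in touches if kind == "value"]
failure_touches = [name for name, kind in touches if kind == "failure"]

print(f"{len(value_touches)} of {len(touches)} touches added value")
# prints "3 of 7 touches added value"
```

The point of writing it down this explicitly: once every touch carries a label, the failure demand stops being invisible.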


The Lesson: Question the Work Before You Optimise It


If I could go back and give my past self one piece of advice at the start of this project:


"Before you ask how to do the work faster, ask why the work exists."


Most operational waste isn't visible in time-motion studies.


It's hidden in:

  • Work created by earlier failures

  • Reactive processes treating symptoms

  • Systems optimising components, not the whole


30% of our 'waste' wasn't inefficiency.

It was failure demand we'd been tolerating for years.


The Uncomfortable Truth


The hardest part of this project wasn't the analysis.


It was getting people to accept that most of the work they'd been doing for years shouldn't have been necessary.


That's uncomfortable.


It feels like blame. (It's not.)


It feels like saying people were doing useless work. (They weren't - they were compensating for system failures.)


But until you face that reality, you're stuck optimising symptoms instead of fixing causes.


The Bigger Implication


If 30% of work in one operation was failure demand, how much exists in yours?


My guess: More than you think.


Because failure demand is invisible until you look for it.


It disguises itself as "necessary work" and "how things are done."


It hides in:

  • Call-back lists that shouldn't exist

  • Follow-up tasks that compensate for incomplete initial work

  • Checking work that should have been right first time

  • Reactive customer service fixing problems you created


You can't optimise your way out of failure demand.


You can only eliminate it by fixing what creates it.


How to Find Failure Demand in Your Operations


Step 1: Map Your Touch Patterns


Track any work item (claim, ticket, order):

  • How many touches does it require?

  • What triggers each touch?

  • Which touches add customer value?


Red flag: Items requiring 5+ touches that should be straightforward
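Step 1 can be done with nothing more than a touch log. A minimal sketch, assuming a log of (item ID, trigger) pairs — the IDs and triggers here are hypothetical, not real claim data:

```python
from collections import Counter

# Hypothetical touch log: one (item_id, trigger) entry per handler touch.
touch_log = [
    ("CLM-001", "notification"), ("CLM-001", "missing info"),
    ("CLM-001", "status chase"), ("CLM-001", "missing info"),
    ("CLM-001", "valuation"),    ("CLM-001", "status chase"),
    ("CLM-001", "settlement"),
    ("CLM-002", "notification"), ("CLM-002", "settlement"),
]

# Count touches per item and flag anything hitting the 5+ red-flag threshold.
touch_counts = Counter(item for item, _trigger in touch_log)
flagged = [item for item, n in touch_counts.items() if n >= 5]

print(flagged)  # prints "['CLM-001']"
```

In practice the log would come from your case-management system; the threshold of 5 is the red flag from Step 1, not a universal constant.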


Step 2: Ask "Why Does This Touch Exist?"


For each non-value touch:

  • What failure created this work?

  • What should have happened earlier?

  • How do we prevent it at the source?


Red flag: Answers like "That's just how we do it"


Step 3: Categorise Root Causes


Group failure demand by type:


  • Information failures (missing/unclear data)

  • Process failures (ambiguous steps)

  • Communication failures (reactive service)

  • System failures (poor design)


Red flag: Same root cause across multiple failures
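Step 3 is a grouping exercise. A sketch, with illustrative touch descriptions and category names (not the insurer's actual taxonomy):

```python
from collections import Counter

# Hypothetical non-value touches, each tagged with a root-cause category.
failure_touches = [
    ("missing photos",         "information"),
    ("unclear liability step", "process"),
    ("status chase call",      "communication"),
    ("second status chase",    "communication"),
    ("photos wrong format",    "information"),
]

# Tally failure demand by root cause.
by_cause = Counter(cause for _desc, cause in failure_touches)

# Red flag: a single root cause behind multiple failures.
repeat_offenders = [cause for cause, n in by_cause.items() if n > 1]
```

When the same category keeps appearing, that's the system failure to fix at source, rather than five separate symptoms to optimise.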


Step 4: Fix the System, Not the Symptom


Don't optimise the failure work. Eliminate it:


  • Standardise information capture

  • Clarify process steps

  • Enable proactive communication

  • Redesign broken systems


Red flag: Solutions that make the failure work 20% faster instead of eliminating it entirely


Want to Find Your Failure Demand?


This project took 6 weeks and identified 30,000 hours of annual savings by systematically finding and eliminating failure demand.


The methodology:

  • Value demand vs. failure demand classification

  • Root cause analysis (not symptom treatment)

  • Systems thinking (optimise the whole, not the parts)

  • PDCA testing (prove before scaling)


Common signs of failure demand:

  • Work requiring 5+ touches that should be simple

  • High volume of customer status calls

  • Rework and call-backs

  • "That's just how we do it" processes


Sound familiar? Let's talk.


