“All my tubes and wires
And careful notes
And antiquated notions…”
– Thomas Dolby, “She Blinded Me With Science”
Over the years I have noticed that successful improvement projects tend to share some key traits. Projects that employ an evidence-based methodology fare far better than those that do not. Some of these projects self-identify as “lean” and some do not, but all are rooted in the same fundamental concepts. PDCA, DMAIC/DMADV, Requirements-Design-Development-Testing-Maintenance (the classic software development lifecycle), and Empathize-Define-Ideate-Prototype-Test (design thinking, the “flexible flyer” of recent PDCA spins?) are all variations on the same scientific-method theme.
In my experience, projects based on the scientific method are more likely to succeed than projects focused primarily on Information Technology (IT) improvements. I am grateful to the smart folks at Gartner for branding their technology-market analysis work “Magic Quadrants,” because when we rely too heavily on market-leading software solutions to solve our bigger problems, we tend to abandon strategy-driven problem-solving disciplines, relying instead upon technology-focused, solution-presuming, pseudo-scientific IT magic. I’ll take Hoshin over Houdini any day.
Culturally, perhaps the most fundamental impediment to achieving the ever-elusive results-based accountability ideal is a widespread belief in IT magic or, put another way, the lack of strategic, evidence-based rigor in tech-driven problem-solving practices. Kentaro Toyama, author of “Geek Heresy: Rescuing Social Change from the Cult of Technology,” makes a compelling case in an MIT Tata Center interview, as does lean IT guru Steve Bell in this YouTube video.
I suggest there are two main reasons why technology-focused initiatives so often over-promise and ultimately under-deliver. For each reason I offer a remedy, a rationale, and a real-life example.
Reason 1: Underestimating Cost and Complexity.
We love fixing big problems with big, comprehensive, out-with-the-old-in-with-the-new solutions. But while the efficiencies to be realized through an IT solution may seem self-evident, implementation invariably proves far more costly and complicated than estimated.
REMEDY
Pursue cheaper, simpler non-IT opportunities for improvement FIRST and save the big tech for later.
RATIONALE
- IT problems are never the sole cause of a performance gap or broken process, and they are rarely the primary cause.
- The bigger the IT solution, the more dubious the ROI (see: most enterprise application implementations since the dawn of time).
- Applying a technology solution to a broken process without first pursuing non-technology opportunities for improvement virtually ensures that:
- Process problems persist, in spite of shiny new technology;
- Technology solution implementation will take longer and cost more, because customer value, underlying processes and real business needs are not adequately understood.
- Without adequate current state process analysis, external consultants (or well-meaning internal IT professionals) end up making faulty assumptions to fill in critical gaps of understanding. Subsequent project time, cost and quality issues can often be traced back to those uninformed assumptions.
REAL-LIFE EXAMPLE
Our client, a health insurer, had a new-group enrollment process with a lead time of 28 days, against an industry average of under 10. On our first day onsite, the COO gave us a directive which seemed unduly restrictive to some of us, but which we all eventually came to understand as a key enabler of our success: TECHNOLOGY IS OFF THE TABLE! With tech off the table, all we could do was map the process, sort the value from the waste, maximize the former, minimize the latter, pilot-test and confirm our hypotheses, restructure and retrain the enrollment team accordingly, and track the metrics. The new process was implemented in less than three months and, running on legacy technology and paper forms, the new lead time was a sparkling 5-7 days, best-in-class!
Most importantly, that quick win served as leadership’s “ah-HA!” moment for fixing their long-suffering platform implementation project. Process analysis would now inform requirements gathering in a meaningful way, with internal process champions, not outside software consultants, leading the charge.
Reason 2: Techies with the best intentions can sometimes be “blinded with [information] science” (aka the Thomas Dolby Effect).
For some IT leaders and analysts, a predisposition toward technological solutions can cloud (pun intended) the ability to acknowledge larger strategic considerations and/or recognize non-technology opportunities for improvement. To these well-meaning folks, software solutions are simply “poetry in motion,” and the biggest problems are best “solved” by big technology solutions. In other words, the most fervent believers in IT magic can be the IT magicians themselves.
REMEDY
Reject the notion that technology expertise is the primary skill required to lead or facilitate an improvement project – even (or perhaps especially) for enterprise application implementation projects (ERP, CRM, etc.).
RATIONALE
- Acknowledge that deep IT expertise can be accompanied by a deep bias toward technology-focused “pre-solutioning”.
- Non-technologist (or less technically savvy) project leaders and facilitators may be more likely to pursue a tech-neutral, evidence-based, strategy-driven approach to problem solving. Favor lean (or other quality-discipline) expertise and business acumen over IT expertise for project leadership and facilitation.
- When software solutions drive analysis, the focus is often on “users” and “fields”. By employing strategy-driven, evidence-based analysis, even in software implementation projects, the focus stays where it needs to be — on “stakeholders” and “customer value”.
- “Requirements gathering” does not equal adequate current state analysis – an important distinction not always recognized by IT leaders and analysts. Before asking what business requirements a software solution needs to address, business analysts should be asking these questions (among others):
- What is the real problem here? What are its root causes?
- Can the problem be broken down into smaller (and more readily understandable and solvable) chunks?
- Who is the customer? What does the customer value in this process?
- What are all the viable opportunities for improvement, and which ones should we pursue first?
REAL-LIFE EXAMPLE
On another health insurance engagement, the client was in the market for some slick new “provider credentialing” software. I was brought in to research the leading solutions (Magic Quadrants, Shazam!) and manage the implementation process. Luckily, the client didn’t get a burr in their saddle when I advised them to hold their horses on the software project.
First, we assessed how providers were generally “managed” and found three separate departments maintaining provider data with their own home-grown Excel tools. Turf disputes were common, “siloed” processes persisted and cross-functional data sharing was anything but de rigueur. Instead of focusing on software, our initial kaizens would focus on team building and on replacing arbitrary functional segregation with customer value-maximizing, flow-enhancing work cells. Addressing root-cause conditions took some time, but it would enable the later successful selection and implementation of a more comprehensive software solution.
Unfortunately, I do not possess an accurate Return-On-Investment (ROI) divining rod that indicates whether any improvement project will deliver the goods as estimated. Without a doubt, technology-driven improvement initiatives fail for many reasons beyond the two I describe. But I have observed that the average post-mortem analysis of a disappointing IT project often does not proceed beyond one or two levels of “why” inquiry and fails to reveal true root-cause conditions. At its roots, I see a problem of technology-based biases and beliefs undermining the reliable, non-blinding science of evidence-based improvement methods.