AI That Works
The oft-quoted statistic that 80% of AI projects fail probably feels familiar to digital transformation folks. We explain how to reliably deploy AI that works, using an approach we call Holistic AI.
At Frontier, even though we do hardcore AI grounded in recent research, our approach to enterprise AI places greater emphasis on delivering long-lasting business value.
Putting hardcore AI in an enterprise is often like putting high-octane fuel into a worn-out car with dirty plugs and no service love. It might fly like the wind for a few hundred yards, but quickly stutter to a halt.
A common measure of AI is some benchmark, like the number of documents classified per dollar. But this can be like watching the car’s speedometer during the high-octane surge.
CEOs have become tired of fast-slow-stop transformation, followed by more YAPP: Yet Another Pilot Project. Whilst early results are always “directionally useful” (beware this euphemism), come the QBR crunch, well, we’ll say no more.
Complex, not Complicated
Our car analogy is a bit misleading. A car, with its many parts, is complicated, whilst an enterprise is complex.
With each car part, we know exactly what it does in relation to the car’s function. But within an enterprise, we never quite know which initiative is moving the needle.
Many initiatives, including AI programs, fail in the “real” org as opposed to the “imaginary” one where project leaders conveniently ignore complex interactions.
The solution to deploying AI that works is for transformation to utilize all four of the following approaches:
- Design thinking – solving the right problem.
- Systems thinking – solving the right problem in context.
- Model thinking – using the right AI solution in the right way.
- Product-centric operations – systematically delivering the right and best outcome.
Combined, we call this Holistic AI.
Complexity grows rapidly, so any attempt to keep up must eventually recognize complexity rather than ignore it. There are good Complexity Theory reasons to back this, but we hope that the four approaches will make sufficient sense to merit consideration.
If you want to shortcut the article and ask for advice in using Holistic AI for your next project, feel free to reach out.
Design Thinking: AI that works on the right problem
Whilst design thinking is now widely practiced, be honest: when did you last use it on any AI initiative?
The essence of DT is to uncover the real org versus the imaginary one. For example, in the imaginary one, sales folks are given dashboards, which we suppose they will use.
In the real org, dashboards often don’t work. We would go so far as to call this the norm.
DT discovers the real use of dashboards within the context of daily routines, goals and incentives. All of these matter – the best dashboard gets no love if there are no incentives to use it.
Whilst DT is often used to discover customer-facing pain, its real superpower is identifying employee pain-points, like dashboard dysfunction.
If you want AI that works, you must target pain-points with accuracy and actionable insights. Be aware: they are seldom what you think.
As the Chief Data Officer of Jaguar Group discovered, much to his surprise, the reason for many failed “data initiatives” was not, as expected, the notorious specter of “data quality”. User research revealed that most employees couldn’t use their data assets to get the intended results. This is poor design and not a recipe for AI that works.
The essence of DT is solving the right problem for the right reason.
Systems Thinking: AI that works in context
A complex enterprise requires special attention when trying to identify leverage points, such as which sales initiative to run or which process to automate.
ST shows that leverage points are often counterintuitive – you may need less of the thing you think you need more of. The canonical example from Donella Meadows (the “Godmother of ST”) is how low-income housing can increase, not decrease, poverty.
ST analysis concerns causal loops: identifying what plausibly causes what. Clearly, sales performance has more than one cause, yet initiatives typically oversimplify by looking at causes in isolation.
For example, perhaps a particular defense strategy for competitive sales seems obvious. Early results seem convincing, so actions are amplified. However, lo and behold, come the QBR, results are hard to find, perhaps even negative.
One explanation is that focusing on one area causes another to suffer.
Another explanation is that over-emphasis of a particular solution has precluded the sales prospect from learning of other solutions. Those others might be more profitable in the long run, as in lifetime value (LTV). Yet such considerations might be absent without ST.
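The interaction can be sketched with hypothetical numbers: suppose sales effort is a fixed budget split between the defended product and a neglected, higher-LTV alternative. All names and figures below are illustrative assumptions, not data.

```python
# Illustrative only: a fixed effort budget is split between product A
# (the "obvious" defensive play) and product B (slower to convert,
# but with a higher lifetime value).
def pipeline_value(effort_a: float, total_effort: float = 10.0,
                   ltv_a: float = 1.0, ltv_b: float = 3.0) -> float:
    effort_b = total_effort - effort_a
    wins_a = 2.0 * effort_a   # A converts easily...
    wins_b = 1.0 * effort_b   # ...B converts slowly but is worth more
    return wins_a * ltv_a + wins_b * ltv_b

print(pipeline_value(effort_a=5.0))  # balanced split -> 25.0
print(pipeline_value(effort_a=9.0))  # defence-heavy split -> 21.0
```

Amplifying the “convincing” early winner lowers total pipeline value – the counterintuitive leverage point ST warns about.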
The value of ST is in helping to consider the whole problem – the real problem – not just a simplified proxy. With complex systems, simplification is often not the answer: it’s more of a coping mechanism. Beware of “low hanging fruit”.
The essence of ST is that it ensures that the right problem is solved within context.
Model Thinking: AI that works optimally
AI, and its cousin Machine Learning (ML), come in different flavors. Understanding those flavors helps to match the kind of solution to the kind of problem. More valuably, it helps workers identify use cases for AI/ML.
Consider the familiar concept of Critical Thinking. To master CT, one first has to learn the types of fallacies and then practice how to spot them in a situational example.
Ditto with AI: one has to learn the patterns of AI and then learn how to spot commensurate opportunities in the enterprise.
If you don’t believe that AI solutions affect how we think about problems, take a look at Christoph Molnar’s book Modeling Mindsets.
At Frontier, we advocate widespread awareness of AI in order to utilize MT. Just as the spreadsheet has become the de facto numerical modeling tool, many AI/ML tools need to similarly become commonplace. This is especially so in the age of Generative AI where much of the technical heavy lifting can be done using AI itself (to build AI/ML solutions).
Many workers, with the right tools and MT know-how, could apply AI directly to their work without the need for data scientists and AI PhDs. We call this “Direct AI” and it is part of the MT approach. It is also highly scalable.
The essence of MT is helping to identify the right solution for the job.
Product-Centric Operations: AI that works to produce outcomes
PCO is what binds the above approaches into a holistic whole. Many AI initiatives fail for lack of an operating model. Failure can be as simple as having no process for adopting the shiny new AI tool. This is remarkably common.
A product-centric approach helps operations to focus on outcomes, not tools.
Product management includes a framework called “Jobs To Be Done” (JTBD). In the case of a dashboard, a user might “hire” the dashboard to do a job: “allow me to predict which product configuration the customer is likely to buy”.
The dashboard might suggest product configurations via some widget. Assuming the widget is usable (often not), what if the SKU codes don’t align with a usable bill of materials (BOM)? If the salesperson cannot produce a quotation due to misalignment with the configure-price-quote (CPQ) tool, then the dashboard has not delivered its intended outcome.
Tracking is vital. Can we track suggestions from the dashboard, through the CPQ and down the sales pipeline? In product management: measuring outcomes is a must.
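As a sketch of what such outcome tracking could look like – every stage name and structure below is hypothetical, not a reference to any particular CRM or CPQ API:

```python
from dataclasses import dataclass, field

# Hypothetical funnel stages a dashboard suggestion can pass through.
STAGES = ["suggested", "quoted", "negotiated", "closed_won"]

@dataclass
class Suggestion:
    sku: str
    stages_reached: list = field(default_factory=list)

    def advance(self, stage: str) -> None:
        self.stages_reached.append(stage)

def conversion_rate(suggestions: list, stage: str) -> float:
    """Fraction of dashboard suggestions that reached a given stage."""
    hit = sum(stage in s.stages_reached for s in suggestions)
    return hit / len(suggestions)

s1 = Suggestion("SKU-1")
for stage in ("suggested", "quoted", "closed_won"):
    s1.advance(stage)
s2 = Suggestion("SKU-2")
s2.advance("suggested")

print(conversion_rate([s1, s2], "closed_won"))  # 0.5
```

The point is not the data structure but the discipline: each suggestion carries its own trail, so the outcome can be measured end-to-end rather than inferred.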
Methodologies, like Lean, can ensure efficient and timely confirmation of outcomes whilst avoiding the classic mistake of taking months to build elaborate solutions that fail to deliver. An Agile framework makes delivery efficient, controllable and measurable.
Agile dictates the use of retrospectives to encourage open and honest diagnosis of initiatives in a blame-free fashion, which is essential to long-term success.
The essence of PCO is systematically delivering the right and best outcome.
A Bundle of AIs that work together
In a fable, a man asks his children to break twigs, which they easily do. But once bundled together, the kids cannot break them.
There is a similar notion in AI concerning weak learners. Each, by itself, is weak, performing only slightly better than chance. But when combined, they produce a strong effect.
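A minimal simulation makes the point, assuming each weak learner is independently correct 60% of the time (independence is the crucial, and in practice hardest, assumption):

```python
import random

random.seed(0)

def weak_learner(truth: bool) -> bool:
    """A hypothetical weak learner: correct only 60% of the time."""
    return truth if random.random() < 0.6 else not truth

def ensemble(truth: bool, n: int = 101) -> bool:
    """Majority vote across n independent weak learners."""
    votes = sum(weak_learner(truth) for _ in range(n))
    return votes > n / 2

trials = 1000
weak_acc = sum(weak_learner(True) for _ in range(trials)) / trials
strong_acc = sum(ensemble(True) for _ in range(trials)) / trials
print(f"single weak learner: {weak_acc:.0%}")  # roughly 60%
print(f"ensemble of 101:     {strong_acc:.0%}")  # far higher
```

The twigs break one by one; the bundle holds.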
With complex enterprises, this principle applies at the macro level. We could go into some theory, but the idea should seem obvious and scalable: empower employees to deploy as many local AI solutions as possible (using holistic approaches) such that the combined effect at the organizational level is robust.
This is only scalable via Direct AI, and will only work when following the above four approaches. Of course, there are lots of details to consider, which you can read about on our blog or discuss in a 1:1 consultation.