Many AI projects are failing. The Economist’s Technology Quarterly last year stated, “…lately doubts have been creeping in about whether today’s AI technology is really as world-changing as it seems … and has failed to deliver on some of its proponents’ more grandiose promises.” (The Economist, 11 June 2020)
I’ve been thinking about why this is. How is it that, while the potential of AI seems so obvious, the reality is often so different? What could we learn from Slimmer AI’s 12 years of hard-won experience making AI work in real-world situations?
Building AI products across multiple ventures, customers, and industries has taught us a few things. We often got it wrong, and we have the battle scars to prove it. But we have learned, and we have evolved an approach that we know works.
We call it the “Lab to Impact 3x3 Approach”. In hindsight, it all seems like common sense. But as Voltaire said, “Common sense is not all that common,” and in applying AI this seems especially true.
More often than not, the answer to “Should we apply AI here?” is (or should be): “No”. Applying AI where it is not the appropriate technology is a sure way to make your AI project or product fail. We try to avoid this trap by forcing ourselves to carefully consider:
EXAMPLE: We were very happy with the output of a solution we developed for a water supply company: our model was 20% more accurate than their existing system at predicting water consumption. Only later did we realize that this was not the “job to be done”. The real job was to “minimize over-supply of water while ensuring there is never an under-supply”. It is a subtle difference, but our results would have been far more meaningful had we realized it up front and trained our model accordingly.
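One common way to encode a job like “minimize over-supply while never under-supplying” at training time is an asymmetric loss that punishes a shortfall far more heavily than a surplus. The sketch below is purely illustrative, not our production code, and the 10:1 penalty ratio is an arbitrary assumption:

```python
import numpy as np

def supply_loss(actual_demand, predicted_supply,
                under_penalty=10.0, over_penalty=1.0):
    """Asymmetric (pinball-style) loss: supplying less water than was
    actually demanded costs 10x more than supplying too much.
    The 10:1 ratio is an illustrative assumption."""
    shortfall = np.maximum(actual_demand - predicted_supply, 0.0)
    surplus = np.maximum(predicted_supply - actual_demand, 0.0)
    return np.mean(under_penalty * shortfall + over_penalty * surplus)

demand = np.array([100.0, 120.0, 90.0])
print(supply_loss(demand, demand * 1.05))  # mild over-supply: cheap
print(supply_loss(demand, demand * 0.95))  # under-supply: expensive
```

A model trained against a loss like this learns to predict a high quantile of demand rather than its average, deliberately erring on the side of slight over-supply, which is exactly what the real “job to be done” asked for.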
There is no “silver bullet” for applying AI. While one can, and should, experiment with out-of-the-box solutions, don’t underestimate the depth of knowledge required to create an industrial-strength, operational AI solution. In our experience, knowing which model to apply is as important as knowing how to apply it. We have learned to always:
EXAMPLE: When we developed the machine learning anomaly detection models for Sentinels (our venture that fights financial crime), our first approach failed. So did our second. Only after testing nine different models did we land on a solution combining two different ML approaches, which resulted in a 4x performance improvement.
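We can’t share the Sentinels models themselves, but the general pattern of combining two anomaly detectors is simple: normalize each detector’s scores and blend them. The sketch below, using two off-the-shelf scikit-learn detectors on synthetic data, is a generic illustration of that pattern; the specific models and the equal weighting are assumptions, not the Sentinels architecture:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))                    # stand-in for transaction features
X_test = np.vstack([rng.normal(size=(95, 8)),           # normal traffic
                    rng.normal(loc=4.0, size=(5, 8))])  # injected anomalies (rows 95-99)

iso = IsolationForest(random_state=0).fit(X_train)
lof = LocalOutlierFactor(novelty=True).fit(X_train)

def zscore(s):
    return (s - s.mean()) / s.std()

# Both detectors' score_samples() return higher values for more "normal"
# points, so we negate the averaged z-scores: higher = more anomalous.
combined = -(zscore(iso.score_samples(X_test)) + zscore(lof.score_samples(X_test))) / 2
print(np.sort(np.argsort(combined)[-5:]))  # ideally prints [95 96 97 98 99]
```

Blending scores this way lets each model cover the other’s blind spots; in our case it took nine candidate models before we found a pair that did.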
Solving the AI problem is only half the solution. If the AI product cannot be implemented, adopted, and accepted, then there is no impact. We ask three questions to help us avoid these pitfalls:
EXAMPLE: After months of development, we were thrilled with a machine learning classifier we built with a pharmacovigilance provider to detect mentions of adverse drug effects in scientific literature. Our team was ready to put it into production. Our disappointment was great when we realized there were still months of work ahead to pass all the internal and external (including regulatory) audits and to adjust our processes. However, ensuring that all our AI results were explainable and auditable turned out to be a critical success factor, and the feeling of accomplishment was so much greater when we finally went live.
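What “explainable and auditable” means depends on the domain and the regulator, but for a linear text classifier one simple pattern is to log, for every prediction, the terms that drove the score alongside a model version and timestamp. The toy sketch below (with made-up data and names, not our production pipeline) illustrates that pattern:

```python
import json
import datetime
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for labelled sentences from the scientific literature.
texts = ["patient developed severe rash after dosing",
         "no adverse events were reported",
         "treatment caused nausea and headache",
         "the study met its primary endpoint"]
labels = [1, 0, 1, 0]  # 1 = mentions an adverse drug effect

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def predict_with_audit_record(sentence, model_version="0.1-demo"):
    x = vec.transform([sentence])
    score = float(clf.predict_proba(x)[0, 1])
    # For a linear model, each term's contribution is weight * tf-idf value.
    contrib = x.toarray()[0] * clf.coef_[0]
    top = np.argsort(np.abs(contrib))[::-1][:3]
    terms = vec.get_feature_names_out()[top]
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input": sentence,
        "score": round(score, 4),
        "top_terms": {t: round(float(c), 4) for t, c in zip(terms, contrib[top])},
    }
    print(json.dumps(record, indent=2))  # in production: append to an audit log
    return score

predict_with_audit_record("mild nausea was observed in two patients")
```

Every prediction leaves a record that an auditor can trace back to a model version and the evidence behind the score, which is the kind of traceability those audits look for.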
This 3x3 approach still serves us well today as we build new AI B2B products and ventures. It is not a surefire recipe for success, but it helps us avoid typical pitfalls and resist the temptation to take shortcuts.
It is good that AI is under scrutiny. When, where, and how to use machines to support human decisions should be considered very carefully.
I remain very strongly convinced that AI is the single most transformative technology of our time, and will continue to shape our lives, our work, and our future. Discussions about its limitations and appropriate implementation are essential for realizing AI’s true potential.
Follow us on LinkedIn and Twitter for more stories like this.