I wrote earlier about the importance of asking yourself some foundational questions about what alternatives your solution is competing with. I also wrote more specifically about competition with human / manual solutions, and about how it’s a good idea to seek the optimal mixture of AI, more traditional tech, and human beings. Sometimes, as it turns out, the best mix is no AI at all.

As I introduced earlier, in one AI project I headed up several years ago, the customer was extracting data from unstructured resumes. To do so, it had invested in a non-machine-learning solution that wasn't accurate enough, so it hired a team in India to review the system's output and check, validate, and correct the results of the automated solution.

The customer invited us to work on the project in the belief that machine learning would be a better approach than this manual one, and we created an ambitious plan for a complex, sophisticated machine learning solution. We weren't sure how long the state-of-the-art system would take to build, but we were confident it would improve accuracy by a good amount.

Part way through the project, the client told us that, though the accuracy wasn't as high as it could be, the human-in-the-loop setup it had originally thought was only a stopgap was actually good enough, and the process was working smoothly. So our five-person AI dream team did a deeper analysis, reported that they agreed with the customer, and quit the project.

Our reason: AI was the wrong tool for the job in the short term, and we couldn't even guess when they'd see a return on their investment from the higher accuracy and automation that ML could provide.

When an alternative solution that puts people (or traditional tech) in the loop is enough to solve what the customer sees as their problem, the upfront and ongoing cost of hiring and equipping people turns out to be the upper limit on what they’ll ever pay you for your state-of-the-art AI. Of course, if they need better accuracy than humans can provide, then that’s a different story. But you often don’t need Kaggle competition-winning accuracy to provide massive business value.
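To make that ceiling concrete, here is a toy back-of-the-envelope sketch. All of the figures (team size, salaries, overhead) are made-up assumptions for illustration, not numbers from the project described above.

```python
# Toy sketch: the cost of the human-in-the-loop alternative caps what a
# customer will rationally pay for an AI replacement.
# All figures are illustrative assumptions.

reviewers = 10                 # people validating and correcting results
cost_per_reviewer = 30_000     # fully loaded annual cost per person (USD)
overhead = 50_000              # tooling, management, and QA per year (USD)

human_loop_annual_cost = reviewers * cost_per_reviewer + overhead

# If the manual process already meets the customer's accuracy bar, this
# figure is roughly the ceiling on the AI solution's total annual cost
# of ownership (licensing, hosting, maintenance, and your fees combined).
print(f"Annual cost of human-in-the-loop: ${human_loop_annual_cost:,}")
```

If your AI system's all-in annual cost lands above that number without delivering accuracy the humans can't, the customer has no economic reason to switch.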

AI should always be considered within its larger context, which is usually a process or a decision within a process. Most AI practitioners assume that fully automating the AI-driven decision is necessary. If the process is currently done by humans, even partially, ask whether full automation is really worthwhile. It's also a good idea to step back and prioritize: which processes will you start with, and in what order will you address the rest? (A framework for prioritizing processes for AI is available here. It focuses on CFO responsibility areas, but provides a useful template that can be adapted for other functions.)

Should we replace the existing AI system with a better one?

A related situation is when you’re looking to replace another AI system that is already in place.

Can you improve on that system's performance enough to offset the cost in time, money, and disruption required to get there? Or can you integrate with that system to achieve an optimal result? To answer these questions, you need to establish the monetary value, to your specific customer, of a percentage-point improvement in a specific metric (or set of metrics) over various timeframes. Everyone's situation is different, so there's really no such thing as a general-purpose return on investment (ROI); you'll need to take the time to understand your specific customer's cost and benefit structure.
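One simple way to frame that customer-specific calculation is as a payback period: how many months until the cumulative net benefit of the improvement covers the cost of building it. The function and all the example numbers below are illustrative assumptions, not a general-purpose formula.

```python
# Hedged sketch: translating a metric improvement into a payback period
# for one specific customer. All names and numbers are illustrative.

def payback_months(value_per_point: float,
                   points_gained: float,
                   build_cost: float,
                   monthly_run_cost: float) -> float:
    """Months until cumulative net benefit covers the build cost."""
    monthly_benefit = value_per_point * points_gained / 12
    net_monthly = monthly_benefit - monthly_run_cost
    if net_monthly <= 0:
        return float("inf")   # the project never pays for itself
    return build_cost / net_monthly

# Assumed scenario: each percentage point of accuracy saves $40k/year in
# rework; the new model adds 6 points; it costs $500k to build and
# $5k/month to run.
months = payback_months(40_000, 6, 500_000, 5_000)
```

Running the same function with the customer's real numbers, over several timeframes, is what tells you whether the replacement is worth the disruption; an infinite payback period is the quantitative version of "the existing system is good enough."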

A key principle here is that, although achieving only a 20% improvement over a theoretical model may doom an academic paper to oblivion, obtaining just a 6% improvement in the business world could translate into millions of dollars in cost savings and save a company from failure. The moral of the story is that, despite it not being taught routinely as part of data science programs, context matters.

It’s not sexy, and probably won’t get you published in a machine learning journal, but the alternative to your solution might just be a somewhat brute-force but technically simple strategy of applying massive computing power and traditional statistical modelling combined with lots of humans validating the results. Regardless of whether a given competitor is “real AI,” if it solves the problem well enough and provides the desired return, I’ve found it’s often hard to justify the expense of applying the latest, cutting-edge AI methods.