As I introduced earlier, AI is moving from a research endeavor to an applied art. Enterprises are starting to deploy AI to solve real-world problems. There are gaps as theory struggles to meet the challenges of practice, and these gaps lead to failed or underperforming AI and ML projects.

This situation is summarized in the figure above, and the remainder of this post surveys those drivers and inhibitors.

Note that the drivers are structural and relatively easy to understand. In contrast, the inhibitors to applied AI at scale are more subtle, as described below. The field’s traditional focus has been on technology, leaving a gap around how AI technology should best be used within the organizations and teams that deploy it. This “outside of the box” perspective is critical for success going forward.

AI Drivers

AI is growing at double-digit rates for a number of reasons:

  • Computation: Key developments that support the massive growth of AI include distributed software environments, on-demand computing, continued exponential increases in compute power per dollar spent, and computing hardware that is specialized for AI.
    We have achieved roughly a billion times the computing power per dollar compared to 30 years ago. And even as these Moore’s law gains seem to be slowing, specialized GPUs and AI-specific hardware are providing additional power. Cloud computing amplifies this effect by making compute resources available worldwide, on demand, and with greatly reduced development, maintenance, deployment, and operations overhead.
  • Big data: We can store, process, transmit, and manage big data at much lower cost and greater speed than ever before. And we are generating considerably more data. This data is a key enabler for AI, as more data generally results in better-performing systems.
  • Algorithms: The last decade has seen the maturation of new AI algorithms that are much faster than ever before. The most important include new natural language methods and deep learning techniques such as convolutional neural networks.
  • Worldwide scientific communities: It is now possible for groups of scientists, engineers, researchers, developers, investors, and others to share knowledge, data, and code from all over the world instantaneously. As a result, knowledge in the field is growing exponentially: significant breakthroughs that once took a year or more to reach publication now appear at a rate of roughly one a week. The “raw materials” for AI are in abundant supply like never before.
  • Programming languages and development environments: These are also key drivers, separate from the computing power itself. Frameworks such as PyTorch and Keras/TensorFlow let people write AI-related code far more easily and in far fewer lines (a minimal sketch appears after this list). That code can then be automatically deployed at scale through easy integrations with cloud computing systems. New development environments, like Jupyter Notebooks and Streamlit, enable data scientists to share code in new ways and to create running applications at scale without requiring dedicated production engineers.
  • Modular applications and microservices: Microservices are small elements of functionality that can be composed into new systems through integration software called Application Programming Interfaces (APIs). For instance, AI modules for speech and face recognition are now available from companies like SAP and IBM, and a sketch of calling such a service appears after this list. Microservices can also be deployed on serverless architectures (also called “Functions as a Service,” or FaaS), where the end user is freed from managing the hosting platform for the service.

    Microservices represent a shift in AI business models: they allow developers to build AI modularly, separating the upgrade cycle of an application from that of any AI logic embedded within it. This “separation of concerns” addresses a problem in older systems, where deploying a new AI model required upgrading the surrounding system as well, adding risk and cost.

    Another advantage of microservices is that they make it possible to reuse AI elements and tools across multiple applications. Because the AI module itself is now considerably easier to improve in isolation, the cost and risk of continuously improving the performance of applications that embed AI drop substantially.
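To make the “fewer lines of code” point concrete, here is a minimal sketch in Keras/TensorFlow of a small convolutional image classifier. The layer sizes and the 28x28 grayscale input shape are illustrative assumptions, not a reference to any particular system discussed above.

```python
# A minimal sketch: a small convolutional image classifier defined in a few lines.
# Layer sizes and the 28x28 grayscale input shape are illustrative choices only.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # e.g., MNIST-sized grayscale images
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # 10 output classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # a trainable CNN classifier in roughly ten lines of code
```

A model like this, built in a notebook, can then be pushed to a cloud endpoint with comparatively little extra work, which is the low-friction path from experiment to deployment described in the list above.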
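To illustrate how an application consumes such a module, here is a minimal sketch of calling a hypothetical speech-to-text microservice over HTTP. The endpoint URL, authentication header, and response field are placeholder assumptions, not the API of any vendor named above.

```python
# A minimal sketch of calling an AI microservice over HTTP.
# The endpoint URL, auth scheme, and response field names are hypothetical placeholders.
import requests

SPEECH_TO_TEXT_URL = "https://api.example.com/v1/speech-to-text"  # hypothetical endpoint

def transcribe(audio_path: str, api_key: str) -> str:
    """Send an audio file to the (hypothetical) speech-recognition service
    and return the transcript it produces."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            SPEECH_TO_TEXT_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"audio": audio_file},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["transcript"]  # field name assumed for illustration

if __name__ == "__main__":
    print(transcribe("meeting.wav", api_key="YOUR_API_KEY"))
```

Because the application depends only on this HTTP contract, the model behind the endpoint can be retrained or replaced without redeploying the application, which is the “separation of concerns” described above.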

In response to these factors, AI is experiencing massive growth: it is starting to affect every industry and every aspect of life, from the food we buy and the medical care we receive to how companies manufacture goods, launch new products, and respond to service outages and customer complaints.

In particular, a recent Gartner study of over 3,000 CIOs indicated that they expect significant change in the way they do their jobs as a result of digitization, and that artificial intelligence is among the top technology trends likely to be a catalyst for that change. 

Yet the study also reported AI growing pains: organizations large and small, across all verticals, are struggling to define, plan, and implement successful AI applications. AI hype far outstrips its reality: some say that “AI at scale is like sex in college: everyone is talking about it, but few are doing it.” Indeed, large and complex AI projects can take as long as four years from conception to execution. And articles like this one, describing AI failure statistics and their causes, are increasingly common.

The good news: AI projects can succeed at much higher rates and can be completed much faster. But not for the reasons that most experts in the field believe. The AI bottleneck has shifted from the need for better technology to better practices in how that technology is planned, used, deployed, assured, and maintained.

Inhibitors

AI has grown up with an academic mindset, which has led to a bias toward technology-based solutions to AI problems. This fosters a false belief: that to maximize AI success, we must hire the most published academics, from the best universities, with the best reputations in their academic field. Yet this is often a suboptimal path, because many of the criteria for academic success are radically different from what is needed to ensure applied AI success.

Typical AI teams are far less equipped to recognize when a project’s problem lies outside the data and the algorithm, such as a lack of market readiness, a competing non-AI product, or issues in the application, people, or processes that surround the AI subsystem. And, more often than not, this is where the problems lie.

This broader picture is, at heart, the realm of innovation. Innovation is not just invention and is not just about technology. It is a 360-degree, integrated discipline in which the inventors, the business owners, and product and system architects coordinate to achieve their goals. Yet for most AI projects, these disciplines do not adequately communicate, which means that AI deployments are less likely to achieve their intended benefits.

The solution, in short, is to mature the discipline of applied AI. AI applications are hard in fundamental ways; success requires both art and science, along with careful attention to what I have called a number of “crucial questions,” which I’ll survey in the next post and explore in more detail in the remaining posts. By the time you’re done reading, you’ll have the keys to successful applied AI deployments.