Taking AI from the lab to the real world

Businesses are keen to move AI from the lab to the field, where it will presumably usher in a new era of efficiency and profitability. But AI behaves quite differently on the testbed than it does in the real world, so the move is not as simple as it seems.

Overcoming the barrier between the lab and real-world applications is fast becoming the next major goal in the race to deploy AI. Because intelligent technology relies on a consistent stream of trustworthy data to work effectively, a controlled setting is not always a meaningful place to test it. With AI, the real test is the uncontrolled environment, and many models are failing it.

The ‘Valley of Death’

Crossing this “Valley of Death” has become so important that several companies have made it a core responsibility for executives. Valerie Bécaert, senior director of research and scientific programs at ServiceNow’s Advanced Technology Group (ATG), is now in charge of the company’s efforts to close the gap. As she previously told Workflow, it’s not simply a matter of training AI correctly; it’s also a matter of changing company culture to build AI skills and foster a higher tolerance for risk.

One method the company is experimenting with is training AI on small amounts of data so that it can discover new facts on its own. After all, real-world data environments differ substantially from lab conditions, with data flowing in from a variety of sources. Rather than simply tossing primitive models into that chaotic environment, low-data learning offers a more streamlined path to effective models that can draw more sophisticated conclusions from what they have already learned.
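Low-data learning covers a range of techniques; one simple illustration is self-training, in which a model fitted on a handful of labeled examples assigns labels to the data it is most confident about and retrains on them. The sketch below uses scikit-learn's SelfTrainingClassifier on a toy dataset; the dataset, model choice, and 30-label budget are assumptions made for illustration, not ServiceNow's actual method.

```python
# A minimal sketch of low-data ("few-label") learning via self-training.
# Dataset, model, and label budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only 30 training examples are labeled; the rest are marked -1 (unlabeled).
rng = np.random.RandomState(0)
y_partial = np.full_like(y_train, -1)
labeled_idx = rng.choice(len(y_train), size=30, replace=False)
y_partial[labeled_idx] = y_train[labeled_idx]

# The model labels its highest-confidence unlabeled points itself and retrains on them.
clf = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
clf.fit(X_train, y_partial)

print("Test accuracy with 30 labels:", accuracy_score(y_test, clf.predict(X_test)))
```

In a low-data setting, the self-trained model typically outperforms one fitted on the 30 labels alone, though the outcome depends heavily on the confidence threshold chosen.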

Leading AI practitioners, those who can credit at least 20% of their EBIT to AI, are moving projects into production gradually and reliably, according to new research by McKinsey & Co. Among the best practices McKinsey identifies:

  • Use design thinking when creating tools.
  • Test performance internally before deployment, then track it in production to verify that results improve steadily (see the sketch after this list).
  • Create clear data governance procedures and policies.
  • Develop the AI skills of technology professionals.
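As a minimal sketch of the second practice above, a production service can compare each batch's accuracy against the baseline measured during internal testing and flag regressions. The baseline value, tolerance, and print-based alerting here are illustrative assumptions; a real deployment would feed a metrics or alerting system instead.

```python
# Sketch: compare live-batch accuracy against the pre-deployment baseline
# and flag regressions. Baseline and tolerance are assumed values.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # measured during internal testing (assumed)
ALERT_DROP = 0.05          # tolerated degradation before raising an alert

def check_batch(y_true, y_pred, batch_id):
    acc = accuracy_score(y_true, y_pred)
    if acc < BASELINE_ACCURACY - ALERT_DROP:
        print(f"[ALERT] batch {batch_id}: accuracy {acc:.3f} fell below baseline")
    else:
        print(f"[OK]    batch {batch_id}: accuracy {acc:.3f}")
    return acc

# Example: a batch whose ground truth arrived later (e.g., from human review).
check_batch([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 0], batch_id=17)
```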

Other research suggests that when it comes to putting AI into production applications, the cloud has an edge. Aside from its scalability, the cloud provides a diverse set of tools and capabilities, such as natural language understanding (NLU) and facial recognition.

The precision and accuracy of AI

Still, the AI model itself is only part of the challenge of bringing AI into production. Harshil Patel, an Android developer, recently wrote on Neptune that most models produce predictions with high accuracy but poor precision. That is a problem for business models that need precise measurements and leave little room for error.
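The gap Patel describes is easy to reproduce with a toy example: a model can classify most cases correctly (high accuracy) while most of its positive calls are wrong (low precision). The labels below are made up purely for illustration.

```python
# A small illustration of the accuracy-versus-precision gap described above.
# The toy labels are assumptions invented for the example, not real results.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

# 100 cases: 90 negatives, 10 positives.
y_true = np.array([0] * 90 + [1] * 10)

# A model that flags 20 cases as positive but is right about only 5 of them,
# while still classifying most of the easy negatives correctly.
y_pred = y_true.copy()
y_pred[:15] = 1          # 15 false positives among the negatives
y_pred[90:95] = 0        # 5 missed positives

print("Accuracy: ", accuracy_score(y_true, y_pred))   # 0.80 -> looks healthy
print("Precision:", precision_score(y_true, y_pred))  # 0.25 -> most alerts are wrong
```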

To combat this, companies must take greater care in the training process to exclude outlier data, and employ continuous monitoring to ensure that bias and variation do not creep into the model over time. Class imbalance is another problem; it occurs when instances of one class far outnumber those of another, and it can skew outcomes away from real-world experience, especially when data sets from additional domains are added.
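One common mitigation for class imbalance, sketched below, is to re-weight classes during training so the rare class carries proportionally more weight. The synthetic dataset and logistic regression model are assumptions made for illustration.

```python
# Sketch: re-weighting classes to counter class imbalance.
# Synthetic data and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# 5% positive class, mimicking a rare-event business problem.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

print("Unweighted model:\n", classification_report(y_te, plain.predict(X_te)))
print("Class-weighted model:\n", classification_report(y_te, weighted.predict(X_te)))
```

Comparing the two reports typically shows the weighted model trading some precision for much better recall on the rare class, which is exactly the trade-off that continuous monitoring needs to keep in view.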

According to Andrew Ng, adjunct professor at Stanford University and founder of deeplearning.ai, there are cultural considerations alongside the technical barriers to production-ready AI. AI tends to disrupt the work of a variety of company stakeholders, and without their buy-in, hundreds of hours of development and training can be wasted. That is why AI initiatives must be not only functional and beneficial to the people who will use them, but also understandable. The first stage of every project should be scoping, which means bringing together technical and business teams to figure out where “what AI can accomplish” intersects with “what is most important to the business.”

The history of technology offers plenty of examples of solutions in search of problems. Because AI is so adaptable, a bad solution can be quickly reconfigured and redeployed, but this becomes expensive and ineffective if the right lessons are not learned from the mistakes.

The challenge for the enterprise as it moves forward with AI will not be to push the technology to its theoretical limits, but to ensure that the effort put into developing and training AI models goes toward solving real-world problems today, while keeping those models able to pivot to the problems of tomorrow.