(Article first appeared on Forbes)
Automation: A word that simultaneously evokes technological and societal progress and a deep sense of fear.
Manufacturers have been chasing automation for years through the implementation of Industry 4.0 initiatives. With each new robot, IoT device or AI-powered device — the combination of IoT and AI is known as AIoT — manufacturers took another step toward automation. But others feared that for every machine, AI-powered camera or robot introduced on the manufacturing floor, a human worker was at risk of losing their job.
The Covid-19 pandemic dramatically shifted the dialogue around automation. Cameras, machines and AI are now seen as allies rather than threats. These devices are a means to maintain business as usual in the face of challenges such as social distancing, remote working, supply chain disruptions and unplanned shutdowns.
These factors have turbocharged the use of AI in a way that should only continue, even after the virus is finally tamed. But as AI is widely and readily adopted on the factory floor (e.g., supplying an additional set of eyes for quality inspections), a new set of problems is set to arise.
As society in general — and manufacturers specifically — become more comfortable with interacting with and using AI as part of everyday life and production cycles, questions will inevitably linger about how AI works and the decisions it makes.
In essence, now that we’ve settled the question of whether AI is needed — with the answer being a resounding “yes” — the new pressing question is “why” AI makes the decisions it does. With AI now performing critical functions in industries from healthcare and e-commerce to cybersecurity and industrial manufacturing, businesses need more intelligible ways to characterize how and why AI reaches a decision.
For example, look at the typical visual quality inspection process on a manufacturing floor. Once an AI system has been trained on images of normal and defective products, it is possible to trace back and identify which components of the system weighed in on the final determination of what is considered “normal” versus “anomalous.” We can also determine and highlight in the image which ensemble of pixels, image features and, ultimately, product regions was responsible for that decision. In this sense, despite not being able to speak and describe its decision process like a human inspector, an AI system can be interrogated and its decisions understood and leveraged.
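One simple way to make that kind of tracing concrete is occlusion sensitivity: mask out one patch of the image at a time and measure how much the “defective” score drops. The sketch below is purely illustrative and assumes a classifier callable that returns a defect probability; the `toy_defect_score` function is a hypothetical stand-in for a trained inspection model, not any specific product.

```python
import numpy as np

def occlusion_saliency(image, classifier, patch_size=8, baseline=0.0):
    """Occlusion sensitivity: blank out each patch and record how much the
    defect score drops. Large drops mean that patch drove the decision."""
    h, w = image.shape
    base_score = classifier(image)          # score for the unmodified image
    saliency = np.zeros((h, w))
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            occluded = image.copy()
            occluded[top:top + patch_size, left:left + patch_size] = baseline
            drop = base_score - classifier(occluded)
            saliency[top:top + patch_size, left:left + patch_size] = drop
    return saliency

# Toy stand-in for a trained inspection model: it reacts to bright scratches.
def toy_defect_score(img):
    return float(img.max())

if __name__ == "__main__":
    part = np.zeros((32, 32))
    part[10:12, 20:25] = 1.0                # a simulated scratch on the part
    heatmap = occlusion_saliency(part, toy_defect_score)
    ys, xs = np.where(heatmap == heatmap.max())
    print(f"Most influential region around row {ys.mean():.0f}, col {xs.mean():.0f}")
```

In a real deployment the heatmap would be overlaid on the product image so an inspector can see, at a glance, which region triggered the “defective” call.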
Explainable (or interpretable) AI is a fairly recent addition to the arsenal of AI techniques developed in the past several years. Today, it includes software code and a friendly user interface able to present workers with human-readable information on how a given piece of data (input) turned into a specific decision (output). Explainable AI is also very useful for accountability and auditability purposes: Understanding why an AI system makes a “defective” determination helps pinpoint flaws on the manufacturing floor and identify where to improve the overall process.
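The human-readable and auditable sides of that idea can live in the same record. The sketch below is a minimal illustration, not a reference implementation; the record fields, part IDs and model version string are all hypothetical examples of what such a system might capture.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InspectionExplanation:
    """Audit-friendly record pairing a decision with the reasons behind it."""
    part_id: str
    decision: str                  # "normal" or "defective"
    confidence: float
    top_regions: list              # (row, col) centers of the influential regions
    model_version: str
    timestamp: str

def to_operator_message(exp: InspectionExplanation) -> str:
    """Render the record as a plain-language note for the line worker."""
    regions = ", ".join(f"row {r}, col {c}" for r, c in exp.top_regions)
    return (f"Part {exp.part_id} flagged as {exp.decision} "
            f"({exp.confidence:.0%} confidence) because of the area near {regions}.")

record = InspectionExplanation(
    part_id="A-1042",
    decision="defective",
    confidence=0.94,
    top_regions=[(11, 22)],
    model_version="inspector-v3.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

print(to_operator_message(record))   # what the worker sees on the interface
print(json.dumps(asdict(record)))    # what gets stored for later audits
```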
As more and more workers are flanked by automated systems to cope with the new normal, AI needs to be not only effective but trusted. In the same way that we develop trust in co-workers who can articulate intelligibly how they reached a decision, we need to learn to trust the AI deployed on the factory floor. Establishing this trust will be the foundation for unlocking the true potential of AI and automation in manufacturing, as well as in every industry where humans and AI work together.
Once you understand the need, the next thing to consider is how to practically augment an industrial AI system with explainability. Peering into a deep learning network, tracing the decision process and presenting the results in a reliable, human-readable format is no small feat. Developing this capability from scratch will most likely cost a great deal of time and capital and yield a solution that does not scale across the organization. Your best bet is to find a partner who can help. In evaluating partners, business leaders should look for AI platforms and frameworks that natively support delivering predictions together with their explanations into the decision process.
While traditionally used in the context of understanding things like bias, AI explainability will need to evolve to be industry-specific — and in manufacturing, explainability will be a must-have to pave the way for wider adoption in a sector in dire need of new tools.