In the fourth of our series of A3 interviews with AI leaders, Max Versace, CEO of Neurala, discusses how integrating deep learning into a facility can benefit the manufacturing industry. Versace, who sits on A3’s Artificial Intelligence Technology Strategy Board, recommends a successful proof-of-concept as a starting point. Check out his advice and a case study on using vision AI to reduce errors and increase efficiency on the production line.
How would you advise companies to choose their artificial intelligence projects – and what questions do they need to answer before they begin?
The fundamental question to answer is “why do I need AI?”. Today more than ever, AI is a misunderstood buzzword, and very often companies wanting to stay relevant kickstart an “AI project” without putting a solid plan in place as to “why”. What happens when the AI experiment succeeds? What is the return on investment envisioned for the AI? How would it be measured? Companies that answer these questions ahead of time will be the ones best equipped, once the Proof of Concept (PoC) succeeds, to turn it into a real deployed technology rather than a costly – and, fundamentally, useless – experiment.
How much of the talk about AI right now is hype vs. reality? Where is AI having the most impact in manufacturing and automation today? What are some of the effective real-world use cases for AI being deployed today?
Today, manufacturers are embracing AI and machine learning at an unprecedented scale: 79 percent of surveyed companies in this study are using machine learning specifically to help automate tasks, and among new technologies AI – Neural Networks and Deep Learning in particular – is a prime candidate to be integrated into existing workflows and processes. AI can also complement human resources to cope with a lack of personnel, ensure persistent quality control and proactive maintenance, and boost overall quality in an increasingly competitive landscape.
Smart automation systems generate mountains of data. How does a company develop a data strategy that can manage this river of information and leverage it in its operations?
There is little AI can do without data – as a comparison, it would be like asking a human being to describe what she’s seeing or hearing with her eyes closed and earplugs in! Good data is key, but collecting data, which implies putting new hardware and software in place, is just the first step. The more important step is to understand the path from the data all the way to the value – in other words, the final goal of the data collection. For example, in the case of visual AI, the final goal is to enable software to process that data in real time to monitor production quality. Knowing that, the data will need to be collected in a way that makes it usable by that specific software. This knowledge will inform the choices of how to collect the data – namely, in a way that enables the software to do its job. This might involve choices of specific cameras, illumination, industrial PCs, networking equipment, data storage, etc. Similarly, if the data to be processed is coming from, say, a plethora of sensors in an industrial machine for the purpose of predictive maintenance (namely, understanding when and how production goes awry before it’s too late to fix quality issues), then a whole new set of requirements needs to be put in place. Having the end goal in mind is key. Finally, the scenario where manufacturers use cloud services to manage runtime AI operations is a chimera, and this illusion needs to be dispelled: AI – and its data – will be stored and processed locally at the manufacturer’s site, for latency, privacy, and security reasons.
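To make the “end goal in mind” point concrete, here is a minimal sketch of what an on-premises, real-time visual inspection loop might look like, assuming an industrial camera exposed through OpenCV. The classify_frame placeholder, file names, and thresholds are illustrative assumptions, not Neurala’s implementation.

```python
import csv
import time

import cv2  # pip install opencv-python


def classify_frame(frame):
    """Toy placeholder for a locally deployed vision model: flags unusually
    dark frames as defects. Replace with the real on-site inference call."""
    brightness = float(frame.mean())
    return ("defect", 0.9) if brightness < 40 else ("ok", 0.9)


def inspection_loop(camera_index: int = 0, log_path: str = "inspections.csv") -> None:
    cam = cv2.VideoCapture(camera_index)        # camera, lens, and lighting choices matter here
    with open(log_path, "a", newline="") as f:  # results stay on the factory floor
        writer = csv.writer(f)
        while True:
            ok, frame = cam.read()
            if not ok:
                break                           # camera disconnected or stream ended
            label, confidence = classify_frame(frame)
            writer.writerow([time.time(), label, round(confidence, 3)])
            if label == "defect":
                print("flag part for manual review")  # e.g., trigger a reject actuator
    cam.release()


if __name__ == "__main__":
    inspection_loop()
```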
Can you share an AI or smart automation success story with us?
AI has graduated from a promising technology into a working product/solution deployed in the field. For instance, my company, Neurala, is working with the frozen food manufacturer apetito to inspect 1 million meals per week with AI. apetito is the leading food producer for the health and social care sector, supplying meals to the elderly and some of the most vulnerable in society. Great quality food that meets various nutritional requirements and dietary needs is at the heart of what they do. Ensuring that the customer receives the right meal, with all elements in the right proportions, is crucial – hence the focus on working with AI to reduce complaints about missing meal components. They needed a solution that could efficiently detect errors in the products coming off the line, without compromising efficiency or cost – better yet, improving both. They began utilizing Neurala VIA (Vision Inspection Automation), and today apetito is confidently delivering the highest quality product to their customers, visually inspecting 100% of their production with AI. This is just one of many examples of real-world AI deployments that create huge value for manufacturers and their customers.
There’s been a lot of talk about pilot purgatory for AI projects. Companies get a solution running in a lab or a small pilot. But bringing it – at scale – into the real world can be a challenge. How do you overcome this?
Oh … Proof of Concept (PoC)! Few words simultaneously inspire both hope (PoCs are almost always an inescapable first step in the AI deployment journey) and despair (so many PoCs work, and companies have no clue what to do with them). If a manufacturer has succeeded in using an AI platform, collected some data, and designed a working PoC – either solo or with the help of a System Integrator – the biggest trap is to assume that they are only one small step away from deploying a feasible solution.
The truth is that AI adoption in a production workflow requires clear success criteria and a multi-step approach. While the first step is often a PoC, countless PoCs fall short of implementation for reasons that have little to do with AI and much to do with the absence of the right planning. To avoid wasted time and money, organizations need to define clear criteria and a timeline in advance to decide whether the tech should go into production. A simple benchmark such as, “If the PoC delivers X at Y functionality, then we’ll launch it here and here by this time,” goes a long way toward helping enterprises define an actual deployment scenario.
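As a rough illustration of such a benchmark, the sketch below encodes a hypothetical go/no-go check for a PoC. The metric names and threshold values are assumptions, chosen only to show the idea of agreeing on “X at Y functionality” before the PoC starts.

```python
from dataclasses import dataclass


@dataclass
class PoCCriteria:
    min_accuracy: float = 0.98     # e.g., required defect-detection accuracy ("X")
    max_latency_ms: float = 100.0  # e.g., per-part inspection time budget ("Y")


def go_no_go(measured_accuracy: float, measured_latency_ms: float,
             criteria: PoCCriteria = PoCCriteria()) -> bool:
    """Return True only if the PoC clears the pre-agreed bar for production."""
    return (measured_accuracy >= criteria.min_accuracy
            and measured_latency_ms <= criteria.max_latency_ms)


# Example: a PoC that hits 99.1% accuracy at 85 ms per part clears the bar.
print(go_no_go(measured_accuracy=0.991, measured_latency_ms=85.0))  # True
```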
In this crucial step, once a clear Return On Investment (ROI) is defined, the next delicate step is the selection of the right infrastructure – both software and hardware. One common myth is that you need a “centralized, massive AI infrastructure with tons of GPUs” to get AI to work. Today this is far from the truth. Platforms exist that provide, in a small form factor and with very manageable expenses, all the toolsets needed to sidestep massive IT projects – and, crucially, they are flexible enough to solve a very large number of vision inspection problems across multiple use cases. Also, in many cases AI can run as well on CPUs as on GPUs, a fundamental difference in an age where manufacturers find themselves bidding for unobtainable GPUs alongside crypto miners and gamers.
Finally, when it comes to IT infrastructure, the whole AI workflow needs to be considered, and one additional myth needs to be dispelled: AI is about delivering value NOW, not in 18 months after a big IT project.
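To illustrate the point above that many vision models run just as well on CPUs, here is a minimal sketch of CPU-only inference using ONNX Runtime. The model file name and input shape are assumptions, not tied to any specific vendor’s product.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime (CPU-only build)

# Load a vision-inspection model and force CPU execution.
session = ort.InferenceSession("inspection_model.onnx",
                               providers=["CPUExecutionProvider"])

# A single 224x224 RGB frame, batched and laid out as the model expects.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
input_name = session.get_inputs()[0].name
scores = session.run(None, {input_name: frame})[0]
print("predicted class:", int(scores.argmax()))
```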
What AI application for industry are you most excited about and why?
What excites me the most are applications where AI becomes an integral part of manufacturers’ workflows, a daily activity/task that is as natural as writing an email or opening an app on our phones.
Today, top AI products and software exist that use Continual (or Lifelong) Learning to enable machine operators to quickly – in seconds or minutes – train AI and deploy it in a production setting. This is key, since products and processes are constantly evolving, and there will never be an AI that works off the shelf – an AI pre-trained on “all possible products” a company will ever manufacture. Rather, manufacturers will need the ability to build, customize, and continuously update AI autonomously, ideally without having to spend thousands of dollars on an expert to come and retrain it when something changes.
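As a toy illustration of the incremental-learning idea (not Neurala’s Lifelong Learning technology), the sketch below updates a scikit-learn classifier in place when an operator labels a handful of new examples; the feature sizes and labels are made-up assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

# Initial training on features extracted from images of today's product mix.
X_initial = rng.normal(size=(200, 64))
y_initial = rng.integers(0, 2, size=200)       # 0 = good part, 1 = defect
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later, the product changes: an operator labels a handful of new examples
# and the model is updated in place, in seconds, without a retraining project.
X_new = rng.normal(size=(20, 64))
y_new = rng.integers(0, 2, size=20)
model.partial_fit(X_new, y_new)

print(model.predict(X_new[:3]))
```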
Is AI only for the big players? How do small and medium-sized companies take advantage of these technologies? How do we democratize the deployment of AI and smart automation?
Manufacturers big and small have faced unprecedented challenges to keep production running – supply chain disruption, fluctuating demand and workforce scarcity. Some are still trying to ramp production back up to pre-pandemic levels. Achieving desired production or product quality goals through innovation can be challenging, but vision AI is already providing great value to manufacturers of all sizes.
Vision AI software solutions offer manufacturers a lower upfront investment for a solution that is faster to train, so it not only pays for itself faster but can also be applied to a much wider set of use cases – from the raw materials coming in to the palletizing of products shipping out. Additionally, vision AI software can be deployed quickly without AI expertise, and the minimal upfront investment shortens the ROI timeframe to months rather than years. Even business cases that could not support a more expensive solution, due to shorter production runs, lower volume, or very low failure rates, are easy to justify when implementing the right vision AI software.
How are companies successfully addressing the lack of trained AI expertise in the workforce?
One of the very first challenges a company looking to solve a problem with AI will bump into is… understanding what AI is and what it can do for them. The first instinct might be for the head of Quality Control to jump in her car, drive to the nearby university, and walk around campus like a ghost asking, “Can you please point me to the AI department?”
With U.S. universities as a whole graduating around 3,000 PhDs in AI-related fields per year, and a median of 5.8 years to complete a PhD, finding a ready-to-go PhD to hire – one with some domain knowledge and the ability to deliver without 1-2 years of training on the subject of visual inspections in a manufacturing context – has very little chance of success. This is the first myth to dispel: today, AI PhDs are not needed, neither to get started nor to get to a deployed final solution. Software platforms are finally available that simplify complex AI problems, providing the needed integration hooks, hardware flexibility, ease of use by non-experts, and, crucially, a very low-cost entry point that makes this technology ubiquitously available to manufacturers and System Integrators.
Hear more of Neurala’s advice on maximizing use of AI in your facility at the AI & Smart Automation Conference on September 29 in Columbus, OH.
Max continues to lead the world of intelligent devices after his pioneering breakthroughs in brain-inspired computing. He has spoken at numerous events including a keynote at Mobile World Congress Drone Summit, TedX, NASA, the Pentagon, GTC, InterDrone, GE, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Ericsson, BAE Systems, AI World, ABB and Accenture among many others. His work has been featured in TIME, IEEE Spectrum, CNN, MSNBC, The Boston Globe, The Chicago Tribune, Fortune, TechCrunch, VentureBeat, Nasdaq, Associated Press and hundreds more. He holds several patents and two PhDs: Cognitive and Neural Systems, Boston University; Experimental Psychology, University of Trieste, Italy. Max’s appetite for advanced technology rivals his anachronistic music taste, saying “I don’t listen to Beethoven, he’s too commercial.”