Three mistakes in embedding AI in robots
At RoboBusiness 2019, Neurala’s Max Versace will discuss a new paradigm of data collection and training for artificial intelligence.
(originally published here)
Too often, roboticists and companies make mistakes when implementing AI, whether because of assumptions about the power of artificial intelligence or something seen in a Hollywood movie. On screen, someone downloads AI into a robot, and it wakes up completely autonomous, fully functional, and knowing everything it needs for a lifetime of service.
Back in the real world, things are more difficult. Companies are spending millions on AI for robotics, only to find that they need a different approach. Massimiliano “Max” Versace, Ph.D., the CEO and co-founder of Neurala, will discuss a new way to train AI at RoboBusiness 2019, to be held Oct. 1-3 in Santa Clara, Calif.
Robotics Business Review recently caught up with Versace to discuss the talk and the big mistakes that robotics companies are making with AI.
Training AI differently
“Many companies approach robotics by devoting a ton of time to the body of the robot and the sensor package, and then figuring out the AI at the very end,” Versace said. “That’s a huge mistake – the biggest misconception that people have is that ‘I can train my AI in the factory and then deploy in the field and it’s going to work.’ That’s the paradigm that everybody’s following, and everybody is failing because in reality, that’s not how you feed the AI.”
Roboticists often underestimate the amount of time and money it takes to develop AI, and they also overestimate what the AI “brain” can do, Versace said. “So they believe that AI is a side dish, with the main portion being hardware, because that’s what they know and what they deal with on a daily basis,” he added. “That’s probably the biggest mistake. The warning they need is that they need to consider AI at the source, from the very beginning.”
A second mistake that robotics companies and roboticists make is thinking that once an AI is trained and deployed, it will just continue to work until the end of time. Versace said that AI is designed to be like the human brain, to continuously learn. Instead of collecting a huge amount of data at the beginning, he said, companies need to create an initial AI model and infrastructure so it can evolve and refine its knowledge over time.
Continuous learning approach
“It’s like you’re having your last supper – the last meal that you’re ever going to eat – so you’re going to eat the chicken and the whale and then the cow, because you’re not going to eat again,” he said. “When you change the paradigm and have a meal three times per day, at that point you’re going to have smaller portions. That’s the idea of continuous learning – having the ability to continue to seamlessly update. You don’t need to ask for 7.3 trillion data points at the beginning, because tomorrow you will have 1.7 million more data points, and you can improve the procedure. You and I don’t learn everything at once – learning is distributed throughout the existence of the AI system, not paid as a one-time fee.”
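To make the contrast concrete, here is a minimal sketch of that continuous-learning idea in code, assuming a scikit-learn-style classifier with `partial_fit`. The simulated data, feature count, and batch sizes are illustrative only and do not reflect Neurala’s actual tooling.

```python
# Minimal sketch contrasting a one-time "last supper" of data with continuous
# learning. Uses scikit-learn's SGDClassifier, which supports incremental
# updates via partial_fit(). The data here is simulated.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])            # e.g., "clean aisle" vs. "spill"
model = SGDClassifier()

def next_batch_from_field(n=200):
    """Stand-in for new labeled data arriving from deployed robots."""
    X = rng.normal(size=(n, 16))
    y = (X[:, 0] > 0).astype(int)     # toy labels derived from one feature
    return X, y

# Start with a small initial model rather than one huge up-front dataset...
X0, y0 = next_batch_from_field()
model.partial_fit(X0, y0, classes=classes)

# ...then keep refining it as new field data arrives, without retraining
# from scratch each time.
for day in range(30):
    X_new, y_new = next_batch_from_field()
    model.partial_fit(X_new, y_new)
```

The point of the sketch is the loop: the model is never “done” after an initial training pass; it keeps absorbing smaller portions of data over its lifetime.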
A third mistake is the belief that AI will make robots smarter, more autonomous, and capable of completely replacing humans in their work. Instead, the goal should be for robots to work with humans, taking on some tasks to make things more efficient.
“It’s not going to substitute 100% of the human efficiency; it’s going to supplement the human work for X number of hours, or take a little piece of the work that the human does,” he said. “[The assumption that robots] would substitute the complete human would be premature for where the industry is, and creates false expectations from the customer and end user that the robot is actually equivalent to a human, and we’re not there as an industry.”
Versace gave an example of how AI can be helpful when added to a robot. Neurala has worked with Badger Technologies and its Marty robot, which scans grocery aisles looking for spills, among other tasks. Prior to the introduction of AI, the company would have a team of people watching the video stream from the robot to check whether there was a spill or not. “What the AI has done is reduced the amount of time that humans have to look at the video,” Versace said. Instead, the AI presents the team with images where there’s a high probability of a spill, and the humans don’t have to sift through images of very clean aisles. “The synergy works where AI complements the human and takes a part of the job that is dull and just not interesting.”
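As a rough illustration of that workflow (not Badger’s or Neurala’s actual code), a detector’s confidence score can be used to forward only likely spills for human review. The `detect_spill_probability` function and the 0.8 threshold below are hypothetical placeholders.

```python
# Rough illustration of AI-assisted review: forward only frames with a high
# spill probability, so humans skip footage of clean aisles.
import random

REVIEW_THRESHOLD = 0.8   # illustrative cutoff; a real system would tune this

def detect_spill_probability(frame) -> float:
    """Placeholder for a trained spill detector's confidence score."""
    return random.random()   # simulated score; a real model would analyze the image

def frames_for_human_review(frames):
    """Return only the frames worth a human's attention."""
    return [f for f in frames if detect_spill_probability(f) >= REVIEW_THRESHOLD]

# Example: of 1,000 frames from the video stream, only the flagged ones go to people.
flagged = frames_for_human_review(range(1000))
print(f"{len(flagged)} of 1000 frames sent for human review")
```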
For his RoboBusiness talk, Versace said he hopes attendees will walk away thinking of data as a continuous acquisition and refinement of the AI model, rather than a “last supper” that you collect once at the beginning and never again. “The software industry moved to continuous iteration of software a long time ago,” said Versace. “AI should move to continuous collection and iterative improvement in the same way.”