What AI will look like in 2027. Hint: It’s all in your head

Pause for a second and look outside your window at a bird, a squirrel, or even an insect. These organisms all perform complex tasks that involve perceiving food and threats, navigating around trees, and following or hiding from other animals. There is not a robot or a drone on the planet that can do what these bugs and small animals can easily do.

While “natural” intelligence is rich and multi-purpose, today’s AI is still remarkably primitive. Current AI tools are “raw”: they are designed and built (coded) for a single, narrow purpose and are relatively unsophisticated. For example, the code and sensors that manage drones, self-driving cars, and toys typically handle only one of the many tasks at hand, such as navigation, object identification, or speech recognition.

For all of these applications to behave more like biological organisms, AI needs a “brain.” Current AI brainpower is designed and built to deliver narrow, isolated functions. You might call it stove-piped functionality: each function, while certainly AI, is disconnected from the others in terms of processing. This means that AI can beat a human chess champion but tends to fall apart when presented with new scenarios. Unlike the AI chess player, the human chess champion can not only play an effective game but can also stand up, drive a car, talk to their kids, listen to music, paint a picture, and much more, all activities that require a tremendous amount of processing and judgment. So, when you make a direct and thorough comparison of AI against humans, or even animals, you can see that artificial intelligence still has a long, long way to go. We’re technology cavepeople.

The key difference between where we are with AI today and where we will be in 2027 is that AI will function more like human and animal brains, which are capable of so much more than today’s AI. Instead of relying on stove-piped processing that parses discrete inputs, we walk through life taking in multiple sensory modalities at once, and we make decisions based on many simultaneous, complex factors that help us achieve the best outcome.

Take, for example, the brain of a rat. Even the smallest animal brains have evolved to solve complex problems, such as enabling animals to forage for food, avoid predators, and interact with other animals. Even with a brain that weighs about two grams, the rat’s ability to combine navigation with visual, olfactory, and touch cues (via whiskers) means that it can accomplish tasks that include sensing, planning, navigation, and obstacle avoidance. These separate functions of the rat’s brain are all integrated and ultimately provide a “turnkey solution” for the task at hand. The secret of animal (and human) brains is that they have discovered a way to co-engineer these various skills in the same low-power package.

This sort of co-engineering is what we call a “whole brain” approach, and this new paradigm is where AI is headed. Integrated processing will become common, and the boundary between software, AI, and human/animal intelligence will blur. Just as the human or animal brain relies on multiple brain areas working together for efficient, autonomous operation, the AI of tomorrow will combine integrated deep learning frameworks with edge processing so that it can operate increasingly in real time.
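To make the idea of integrated processing a bit more concrete, here is a minimal sketch of a single network that fuses two sensory streams into one decision instead of running a separate, stove-piped model per sense. It is purely illustrative, not any specific product’s architecture; the choice of vision and audio as the two modalities, the layer sizes, and the four output actions are all assumptions made for readability.

```python
# Illustrative sketch only: one network ingests multiple sensory streams at
# once, rather than one isolated model per sense. All dimensions are
# arbitrary assumptions chosen for readability.
import torch
import torch.nn as nn

class IntegratedPerception(nn.Module):
    def __init__(self, vision_dim=512, audio_dim=128, fused_dim=256):
        super().__init__()
        # Each modality gets a small encoder...
        self.vision_encoder = nn.Sequential(nn.Linear(vision_dim, fused_dim), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, fused_dim), nn.ReLU())
        # ...but the decision is made from a single fused representation,
        # a rough analogue of the "whole brain" processing described above.
        self.fusion = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.ReLU())
        self.decision = nn.Linear(fused_dim, 4)  # e.g. stop / slow / turn / go

    def forward(self, vision_features, audio_features):
        v = self.vision_encoder(vision_features)
        a = self.audio_encoder(audio_features)
        fused = self.fusion(torch.cat([v, a], dim=-1))
        return self.decision(fused)

# Usage: both senses inform one decision in a single forward pass.
model = IntegratedPerception()
action_logits = model(torch.randn(1, 512), torch.randn(1, 128))
```

The point of the sketch is simply that, once the streams share a representation, every decision can draw on all of the senses at once, the way the rat combines vision, smell, and whisker touch.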

With multiple AI functions built into the same package or single computing block, AI systems will achieve better, faster performance thanks to the synergies between those functions. This will enable AI to accomplish abstract reasoning, allowing machines to execute sophisticated, non-intuitive actions that bring them a step closer to us.

For example, the blurred line between AI and software could make transportation easier and safer. Today’s self-driving cars are designed with the stove-piped approach of adding one sensor or module at a time and then combining all these processing streams in the hope that they work together. Humans, on the other hand, synergistically combine tactical vision (“Look out! There’s a pothole ahead!”) with:

  • High-level navigation: “I know that building; I usually turn right here.”
  • Long-range collision avoidance: “That car drives funny. I will keep my distance.”
  • High-level planning: “I better take that side road, because it may have less traffic.”

While a traditional approach would result in an unmanageable tangle of hard-to-integrate software and hardware components, whole-brain AI approaches co-design these components using the same building blocks throughout: artificial neurons connected by simulated synapses, pretty much the way brains do with their natural counterparts.
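As a rough illustration of what “the same building blocks throughout” can mean in practice, the sketch below gives the driving skills listed above their own output heads while sharing one trunk of artificial neurons and weights (the simulated synapses). It is a hypothetical example, not the author’s actual system; the task heads, their output sizes, and the layer widths are assumptions.

```python
# Illustrative sketch only: the driving skills from the list above become
# heads on one shared trunk, all built from the same building blocks
# (linear layers of artificial "neurons" and their weights, the simulated
# synapses). All dimensions and heads are hypothetical.
import torch
import torch.nn as nn

class WholeBrainDriver(nn.Module):
    def __init__(self, sensor_dim=256, hidden_dim=128):
        super().__init__()
        # One shared trunk instead of four separate, stove-piped modules.
        self.trunk = nn.Sequential(
            nn.Linear(sensor_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Each skill reads the same shared representation.
        self.hazard_head = nn.Linear(hidden_dim, 2)      # pothole ahead? yes / no
        self.navigation_head = nn.Linear(hidden_dim, 3)  # left / straight / right
        self.avoidance_head = nn.Linear(hidden_dim, 1)   # safe following distance
        self.planning_head = nn.Linear(hidden_dim, 5)    # score candidate routes

    def forward(self, sensor_features):
        shared = self.trunk(sensor_features)
        return {
            "hazard": self.hazard_head(shared),
            "navigation": self.navigation_head(shared),
            "avoidance": self.avoidance_head(shared),
            "planning": self.planning_head(shared),
        }

driver = WholeBrainDriver()
outputs = driver(torch.randn(1, 256))  # one pass serves all four skills
```

Because every skill shares the same trunk, improvements to the shared representation benefit all of them at once, which is the efficiency argument behind the whole-brain approach.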

Another example is our work with NASA. When Neurala was designing a “rat brain” to guide a Mars Rover in a simulated Mars environment, we followed this whole-brain approach: we had very little compute power to rely on and could only afford a solution that not only combined all these functions in one package but also did so efficiently.

After all, even today’s stove-piped AI is making the software and machines with which we interact so much better, delivering improved productivity in many parts of our lives. As AI begins to emulate advanced human and animal brain activity, it will become an increasingly useful tool, solving problems in real time and leveraging humanlike decision-making capabilities. In 10 years, the same sort of integrated processing that makes the lowly rat seem like a genius will be the kind of AI that delivers benefits to all.

Max Versace is the CEO of Neurala, a deep learning neural network software company, and founding director of the Neuromorphics Lab at Boston University.
