
Solving AI’s energy challenge: learning at the compute Edge

AI unplugged: rethinking energy use with Edge Learning

This morning, as I sipped my espresso and watched my dog Noah chase (and fail to catch) a squirrel in my yard, I couldn’t help but be struck by the spectacular efficiency of their brains in action. Powered by just a few nuts per day, a squirrel can perceive dangers, plan an escape path, and continuously correct it until safety is reached. And the next time my dog Noah is in the vicinity, the squirrel will have “learned the lesson” and will be more efficient at avoiding him.

Despite what you read today from the misinformed “AI experts” who have flooded the AI arena in search of gold, the latest AI models trail massively behind their biological counterparts along several dimensions. One key aspect is training (or rather, learning in) these models, with a typical training run of a model like ChatGPT or Claude requiring enough electricity to power a small city.

Today, as the utility of artificial intelligence (AI) has expanded, the case for its deployment has become so compelling that it is at the basis of the dramatic expansion of computational resources dedicated to AI. This, coupled with the ever-increasing complexity of these models, has led to substantial energy demands for training them. Traditionally, the industry’s solution to this challenge has been the construction of massive data centers, some even powered by unconventional energy sources like nuclear power, to meet the computational and energy requirements of AI training.

The escalating energy demands of AI

Training large AI models, particularly in the domain of natural language processing, is a massively energy-intensive process. It relies on an algorithm introduced in the 1980s called Backpropagation, based on an iterative approach: learning occurs by presenting each item in a massive dataset multiple times until the “AI gets it”. Everyone knows that repeating the same thing over and over is far from efficient. For instance, the training of GPT-3, a 175-billion-parameter language model, consumed 1,287 megawatt-hours of energy, resulting in an estimated carbon footprint of 552 metric tons of CO2-equivalent emissions. This substantial energy consumption and its associated environmental impact have raised concerns about the sustainability of AI development.
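To see where all that energy goes, here is a minimal, illustrative sketch of the iterative loop Backpropagation imposes (a toy PyTorch model, not the actual training code of any production system): every sample is revisited epoch after epoch, and every visit costs a full forward and backward pass.

```python
# Toy sketch of iterative Backpropagation training: the same data is
# presented over and over, and each presentation costs a full forward
# and backward pass. Scaled to billions of parameters and trillions of
# tokens, this loop is what burns megawatt-hours.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a "massive dataset": 1,000 random samples, 10 classes.
inputs = torch.randn(1000, 32)
labels = torch.randint(0, 10, (1000,))

for epoch in range(100):                 # the same data, 100 times over...
    for x, y in zip(inputs.split(64), labels.split(64)):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)      # forward pass
        loss.backward()                  # backward pass (Backpropagation)
        optimizer.step()                 # tiny weight nudge; repeat until
                                         # the model finally "gets it"
```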

Stuck with this learning paradigm, and to address these AI-fueled energy demands, the industry has relied heavily on the construction of large, centralized data centers; companies like Amazon have even ventured into the acquisition of nuclear-powered data center campuses, underscoring the scale of energy required for AI training. While these data centers offer the necessary computational resources, they come with significant environmental and operational costs.

Wait… how did the squirrel learn to avoid my dog in one single shot (no potentially deadly iterations!) with the energy of digesting a couple of nuts?

From inferring to learning at the compute Edge

Biological brains achieve their efficiency without resorting to data centers. How do they do that? Evolution has figured out an algorithm much more efficient than Backpropagation. The closest AI research has gotten to this Holy Grail of learning is a category of algorithms called Continual (or Lifelong) Learning. These algorithms learn “one data point at a time”, consuming roughly as little power as AI consumes at inference, far less than what AI training consumes today. One implementation of Continual Learning, called Lifelong-DNN (L-DNN), inspired by brain neurophysiology, is able to add new information on the fly.

That’s the equivalent of the squirrel learning about the existence of my dog Noah. One time was enough…

Unlike Backpropagation, L-DNN mimics biological brains with a completely different methodology: the iterative processes typical of Backpropagation are mathematically approximated by instantaneous ones, in an architecture that introduces new processes, layers, and dynamics with respect to traditional DNNs. When it comes to training, you train only once on every piece of data you encounter. This translates into massive gains in training speed: on the same hardware, L-DNN can train between 10,000 and 50,000 times faster than a traditional DNN.
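L-DNN’s internals are proprietary, so the following is only a rough sketch of the broader family of techniques it belongs to: a head that absorbs each example (assumed here to arrive as a feature vector from some fixed extractor) in a single, instantaneous update. All names below are illustrative, and a simple nearest-class-mean prototype head stands in for the real architecture.

```python
# Illustrative one-shot, per-sample learning (not Neurala's actual L-DNN):
# each example triggers exactly one local update, with no epochs and no
# backward pass, so training cost is comparable to a single inference.
import numpy as np

class OneShotHead:
    """Nearest-class-mean classifier: one update per example, ever."""

    def __init__(self):
        self.prototypes = {}   # label -> running mean of feature vectors
        self.counts = {}       # label -> number of examples seen

    def learn(self, features: np.ndarray, label: str) -> None:
        # Instantaneous update: fold the new example into its class
        # prototype. No iteration over the rest of the dataset.
        n = self.counts.get(label, 0)
        proto = self.prototypes.get(label, np.zeros_like(features))
        self.prototypes[label] = (proto * n + features) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, features: np.ndarray) -> str:
        # Classify by the nearest stored prototype.
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(features - self.prototypes[c]))

head = OneShotHead()
head.learn(np.array([0.9, 0.1]), "dog")       # one example of "dog"...
head.learn(np.array([0.1, 0.9]), "squirrel")  # ...one of "squirrel"
print(head.predict(np.array([0.8, 0.2])))     # -> "dog", learned in one shot
```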

Edge AI, a paradigm that involves processing data and performing computations closer to the source rather than relying on centralized cloud infrastructure, has emerged as a promising solution to mitigate the energy consumption challenges associated with AI. By reducing the need for transmitting large volumes of data to centralized servers, Edge AI not only enhances privacy and reduces latency but also significantly decreases energy usage.

However, performing an inference at the Edge is not enough. A new era in AI efficiency is unlocked when not only inference but also learning is performed at the Edge.

Leveraging Edge Learning for a sustainable AI ecosystem

Lifelong Deep Neural Networks (L-DNN) offer a compelling approach to enabling sustainable AI on the Edge. Unlike traditional deep neural networks (DNNs), which require extensive training on large datasets and are prone to catastrophic forgetting, L-DNN technology facilitates on-device, incremental learning. This capability allows AI models to continuously learn and adapt without the need for frequent retraining on centralized servers, thereby reducing the reliance on energy-intensive cloud data centers.
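To make “incremental learning without catastrophic forgetting” concrete, consider again the toy prototype head sketched above (an illustration only, not L-DNN itself): adding a brand-new class is a single local update on the device, and the stored knowledge for existing classes is never touched, so there is nothing for new training to overwrite.

```python
# Continuing the illustrative OneShotHead from the earlier sketch:
head.learn(np.array([0.5, 0.5]), "cat")    # new class, learned on the spot

# The "dog" and "squirrel" prototypes were never modified, so prior
# knowledge survives intact; there is no catastrophic forgetting.
print(head.predict(np.array([0.9, 0.1])))  # still -> "dog"
```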

L-DNN technology is particularly well-suited for Edge AI applications due to its ability to learn from limited data samples and its energy-efficient training process. By enabling on-device learning and inference, L-DNN minimizes the energy consumption associated with transmitting data to and from centralized servers, contributing to a more sustainable AI ecosystem.

How AI is moving to the compute Edge is exemplified by the collaboration between Neurala, a pioneering company in the field of AI capable of learning at the Edge, and Lattice Semiconductor, with the goal of bringing AI learning capabilities to the Edge. This partnership aims to integrate Neurala’s L-DNN technology into Lattice’s low-power, highly flexible field-programmable gate arrays (FPGAs), enabling AI learning and inference directly on devices and, by doing so, significantly reducing the energy footprint associated with AI training and inference, paving the way for more sustainable AI deployments.

This means that a low-cost camera equipped with a relatively inexpensive FPGA could become an “AI-powered device”, in some use cases bypassing completely the need for specialized processors downstream of the FPGA, resulting in affordable AI computing in a power envelope that affords a small form factor and a lower overall cost.

A concrete example: Edge AI for visual inspections in industrial manufacturing

The applications of Deep Learning, machine vision, and more generally AI in industrial manufacturing are vast, ranging from quality control and defect detection to predictive maintenance and process optimization. Neurala’s deployment of L-DNN technology for visual inspections in this domain showcases the energy efficiency and operational benefits achievable through Edge AI.

By leveraging L-DNN, Neurala Visual Inspection Automation (VIA) can perform on-device learning and adapt to new scenarios without the need for frequent retraining on centralized servers. This capability not only reduces energy consumption but also enhances operational efficiency by enabling real-time adjustments to inspection processes based on the specific manufacturing environment.

Furthermore, the privacy and security benefits of Edge AI are particularly valuable in industrial settings, where data protection and intellectual property concerns are paramount. By processing data locally and minimizing data transmission, Neurala’s Edge Learning-based machine vision solutions ensure that sensitive information remains within the confines of the manufacturing facility, mitigating potential data breaches and enhancing overall security.

We need Edge Learning, now

There is immense potential, and need, for Edge Learning to revolutionize various industries by offering sustainable, energy-efficient, local, privacy-friendly AI solutions. From manufacturing and healthcare to transportation, smart cities, and agriculture, the ability to process and interpret data locally, while minimizing energy consumption and ensuring data privacy, opens up new avenues for AI adoption.

As industries across the globe embrace AI, there will be a parallel escalation in the energy demands of AI training and inference, posing significant environmental challenges. Low-power chipsets (e.g., FPGAs) coupled with Edge Learning will be essential in driving the adoption of energy-efficient AI solutions. By combining cutting-edge algorithms with low-power hardware platforms, these solutions pave the way for a future where AI can be adopted at scale without compromising environmental sustainability or operational efficiency.