Cloud vs Edge: an Industrial Manufacturing dilemma

When it comes to deploying AI for an application, manufacturers need to think deeply not only about what to develop, but also about the physical incarnation of the envisioned AI system.

Typically, Deep Learning — the subspecies of AI applications that has gained the spotlight in the past few years thanks to its success in deploying real-world, working AI at scale — is offered as a Cloud-supported Software-as-a-Service (SaaS) platform.

This does not work for many manufacturers, who tend to avoid Cloud-reliant software because of the security and latency issues it introduces into the complex workflows they manage day to day.

Fortunately, technology is evolving, and in a 2020 that has otherwise been challenging for the sector, a paradigm-changing evolution of Deep Learning and AI is arriving to meet the needs of manufacturing.

But first, let’s understand what we mean by Deep Learning and how hardware — Cloud vs Edge — plays such an important role.

Under the hood: AI

AI’s generic name hides a variety of approaches that span from huge Artificial Intelligence models crunching data on distributed cloud infrastructure to tiny Edge-friendly AI that analyzes and mines data on small processors.

Let’s simplify the landscape and split AI into two main classes: the ‘heavy’ and the ‘light’ type. Heavy AI requires large compute substrates to run, while Light AI can do what Heavy AI is capable of on a small compute footprint.

The introduction of commodity processors such as GPUs, and later their portability, has made it technically and economically viable to bring AI/Deep Learning/DNN/Neural Network algorithms to the Edge.

Edge AI computing is, intuitively, a great idea: mother nature figured this out over eons of evolutionary time by moving some biological compute to the periphery, from our sensors (e.g., eyes, ears, skin) to our many organs and muscles, where much of the world’s ‘data’ is produced as organisms explore their environment.

Similarly, it makes tons of sense for manufacturers to exploit this feature. Take, for instance, quality control cameras in industrial machines, where a typical machine can be processing dozens of frames per second and hundreds of products per hour. It would be extremely wasteful and inefficient for these cameras to ship all the captured frames to a centralized Cloud for AI processing. A more intelligent strategy is to process them at the Edge and only occasionally send the pertinent, highly important frames (e.g., those showing a possible product defect) to a centralized location or to a human analyst.

Issues such as bandwidth, latency, and cost dictate the need for Edge processing.
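To make this concrete, here is a minimal Python sketch of the filter-at-the-Edge pattern described above. The model call, threshold, and uplink function are hypothetical placeholders (no specific product or API is implied); the point is that only frames scored as likely defects ever leave the device.

```python
import numpy as np

DEFECT_THRESHOLD = 0.8  # assumed confidence cutoff; tuned per application


def defect_score(frame: np.ndarray) -> float:
    """Stand-in for an on-device model returning a defect confidence in [0, 1]."""
    return float(np.random.rand())  # placeholder for real Edge inference


def send_to_cloud(frame: np.ndarray, score: float) -> None:
    """Hypothetical uplink; in practice an MQTT or HTTP call to a central service."""
    print(f"uploading suspicious frame, defect score {score:.2f}")


def process_frame(frame: np.ndarray) -> None:
    score = defect_score(frame)
    if score >= DEFECT_THRESHOLD:
        send_to_cloud(frame, score)  # rare: only suspicious frames consume bandwidth
    # all other frames are handled and discarded locally


# Simulate a camera stream: many frames in, only a fraction uploaded.
for _ in range(100):
    process_frame(np.zeros((480, 640, 3), dtype=np.uint8))
```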

There is an important caveat, though: running AI on small Edge compute (what in jargon is called inference, or making ‘predictions’, e.g., I see a normal product vs. a defective one) is different from learning at the Edge. Namely, using the acquired information to change, improve, correct, and refine the Edge AI is not only difficult; it is also extremely important for manufacturers, who need to customize their AI quickly to achieve flexibility.
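A minimal PyTorch sketch (my own illustration, not tied to any particular product) makes the asymmetry concrete: inference is a single, cheap forward pass, while gradient-based learning requires backpropagation, optimizer state, labels, and in practice many examples and passes over the data.

```python
import torch
import torch.nn as nn

# A toy classifier standing in for an Edge vision model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(1, 28, 28)  # one incoming "frame"

# Inference: one forward pass, no gradients, small memory footprint.
with torch.no_grad():
    prediction = model(x).argmax(dim=1)

# Learning: backpropagation needs gradients, an optimizer, and labels --
# and typically many examples and epochs, i.e. far more compute and memory.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
label = torch.tensor([3])
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()
optimizer.step()
```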

Edge (vs Cloud) Learning: a reality

I first realized how big this difference is while working with NASA back in 2010, when my colleagues and I implemented a small brain emulation to control a Mars rover-like device with AI that needed to be able to run and learn at the Edge.

For NASA, it was important that a robot would be capable of learning ‘new things’ completely independently of any compute power available on Earth. Data bottlenecks, latency, and a plethora of other issues made it vital to explore different breeds of AI than the ones developed up to that time: algorithms capable of digesting data and learning — namely, adapting the AI’s behavior to the available data — without requiring huge amounts of compute power, data, and time. Traditional Deep Neural Network (DNN) models were not capable of doing that.

We built such an AI, which we dubbed Lifelong Deep Neural Network (Lifelong-DNN) for its ability to learn throughout its lifetime (vs. a traditional DNN, which can only learn once, before deployment).

Little did we know that this AI would turn out to be more useful on Earth than on Mars.

The power of Edge Learning for Industrial Manufacturers

Edge learning solves one of the burning issues of today’s AI implementations: their inflexibility and lack of adaptability. AI algorithms can be trained on huge amounts of data, when available, and be fairly robust if all the data needed for training is captured beforehand. Unfortunately, this is not how manufacturing works, because the relevant data — e.g., a defective product — is usually not available until the few bad products come off a line, unpredictably. AI needs to be able to quickly exploit this disparate, rare data to adapt, but traditional DNNs can’t work in these realistic conditions.

Novel approaches such as Lifelong-DNN — a Deep Learning paradigm that enables learning at the compute Edge, e.g., on a CPU — enable AI-powered cameras not only to understand the data coming to them, but also to adapt and learn. For example, in the industrial machine described above, Edge learning would enable its dozens of cameras to learn new product types, and defects, in a real-world scenario where new items are introduced all the time and new, previously unseen defects show up on the production floor.

No AI can exist that is pre-trained on newly created products. The data simply does not exist: the AI needs to be trained on the spot!

With Edge Learning, AI can learn to recognize new defects directly at the Edge where it is needed, without having to be ‘reprogrammed’ from scratch.
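One common pattern for this kind of on-device learning pairs a frozen feature extractor with a lightweight classifier head that can absorb new classes from a handful of examples, with no backpropagation. The sketch below illustrates that pattern with a nearest-centroid head; the names and the update rule are my own assumptions, not Neurala’s actual Lifelong-DNN algorithm.

```python
import numpy as np


class EdgeLearner:
    """Lightweight classifier head over a frozen feature extractor (hypothetical)."""

    def __init__(self) -> None:
        self.centroids: dict[str, np.ndarray] = {}
        self.counts: dict[str, int] = {}

    def learn(self, features: np.ndarray, label: str) -> None:
        """Incremental update: a running mean per class, no retraining from scratch."""
        if label not in self.centroids:
            self.centroids[label] = features.astype(float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.centroids[label] += (features - self.centroids[label]) / self.counts[label]

    def predict(self, features: np.ndarray) -> str:
        """Nearest-centroid classification -- cheap enough for a CPU at line speed."""
        return min(self.centroids, key=lambda c: np.linalg.norm(features - self.centroids[c]))


# A new defect type can be taught from a single example, on the device:
learner = EdgeLearner()
learner.learn(np.array([0.1, 0.9]), "good")
learner.learn(np.array([0.8, 0.2]), "scratch")  # new class, learned instantly
print(learner.predict(np.array([0.75, 0.3])))   # -> "scratch"
```

In practice the feature vectors would come from a pre-trained network running on the same device; the point is that adding a class costs one vector update, not a full retraining run.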

AI that learns at the Edge is a paradigm-shifting technology that will finally empower AI to truly serve its purpose: bringing intelligence to the compute Edge where it is needed, with the speed, latency, and cost that make it affordable for every device.

This will enable manufacturers to lay a fundamental building block of their Industry 4.0 strategy inexpensively, quickly, and directly where it counts: at the Edge.