From Computer Vision to Deep Learning: the AI path to innovation in industrial applications

In the race to help manufacturing plants increase production in the face of an intermittent human workforce, manufacturers are looking at how to supplement their cameras with AI so that human inspectors can spot defective products immediately and correct the problem.

While machine vision has been around for more than 60 years, the recent surge in the popularity of deep learning has brought this sometimes misunderstood technology to the attention of major manufacturers globally. As CEO of Neurala, a deep learning software company, I’ve seen how deep learning is a natural next step from machine vision and has the potential to drive innovation for manufacturers.

How does deep learning differ from machine vision, and how can manufacturers leverage this natural evolution of camera technology to cope with real-world demands?

Machine Vision: When Simple Is Just Too Simple

In the 1960s, several groups of scientists, many of them in the Boston area, set out to solve “the machine vision problem.” The approach was simple but powerful: Scientists proposed a framework in which machine vision systems were characterized by two steps.

In the first, scientists decide which simple features — edges, curves, color patches, corners and other salient key points — are important in an image. In the second, they devise a classifier, usually by hand-tuning several “thresholds” (for instance, how much “red” and “curvature” it takes to classify an object as a “red apple”), that weighs these features automatically and decides which object they belong to. While this approach was nowhere near a complete characterization of the power of human vision, it was simple and effective enough to survive for 50 years virtually unchanged.
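
To make the two steps concrete, here is a minimal, purely illustrative sketch in Python (using only NumPy). The feature definitions and threshold values are hypothetical; they exist only to show where the hand-tuning happens.

```python
import numpy as np

# Illustrative only: not any specific product's code.
# Step 1: hand-picked features. Here, the fraction of "red" pixels and a crude curvature proxy.
def extract_features(image_rgb: np.ndarray) -> dict:
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    redness = np.mean((r > 150) & (g < 100) & (b < 100))      # share of strongly red pixels
    edges = np.abs(np.diff(image_rgb.mean(axis=2), axis=0))   # vertical intensity changes
    curvature = np.mean(edges > 20)                           # share of "edgy" pixels as a rough proxy
    return {"redness": redness, "curvature": curvature}

# Step 2: a "classifier" that is nothing more than hand-tuned thresholds.
RED_APPLE_THRESHOLDS = {"redness": 0.30, "curvature": 0.05}   # chosen by a person, by trial and error

def is_red_apple(features: dict) -> bool:
    return all(features[name] >= threshold for name, threshold in RED_APPLE_THRESHOLDS.items())

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in for a camera frame
print(is_red_apple(extract_features(image)))
```

Every feature definition and every number in RED_APPLE_THRESHOLDS has to be chosen, and later re-tuned, by a person whenever the lighting, the camera or the product changes.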

In this original form, machine vision enabled a plethora of real-world applications and became a critical part of manufacturing, powering quality control deployments ever since.

In a visual inspection example, a machine vision system may be deployed to search for defects in an image of a product. The first step usually processes images of the product, computing contrast, edges, colors and other features that may be indicative of defects. The classifier — the second step — is then hand-tuned by the quality inspector to determine whether the product shows enough “suspicious features” to make a final determination of damage.
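
As a hedged illustration of what such an inspection check might look like, the sketch below assumes an OpenCV-style pipeline; the features (edge density and contrast) and the threshold values are invented for the example, not taken from any real deployment.

```python
import cv2
import numpy as np

# Hypothetical threshold-based inspection check; feature choices and thresholds are illustrative.
def inspect(frame_bgr: np.ndarray,
            max_edge_density: float = 0.08,   # hand-tuned: "too many" edges suggests scratches or cracks
            max_contrast: float = 45.0) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                          # step 1: compute edge features
    edge_density = float(np.count_nonzero(edges)) / edges.size
    contrast = float(gray.std())                               # step 1: a crude contrast feature
    # Step 2: the "classifier" is just a pair of thresholds the quality inspector keeps re-tuning.
    return edge_density > max_edge_density or contrast > max_contrast

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
print("reject" if inspect(frame) else "pass")
```

Each new product or defect type typically means another round of picking features and re-tuning thresholds by hand, which is exactly the burden described next.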

This approach is simple and powerful in some cases, yet quite ineffective in many others, because it fails in situations where the difference between good product attributes and defects is highly qualitative, subtle and variable. Yet this is the nature of the world we live in.

Machine vision’s answer: Create more features and more thresholds, a steady climb toward higher complexity that makes these systems extremely complicated to tune even for the most experienced engineers and operators.

The Path To Deep Learning: Shifting Intelligence From Human To Software

In the ’80s, while machine vision was all the rage, a small subset of scientists interested in bridging the gap between biological systems and machines started to tinker with the idea of mimicking neurons and their architecture in the brain’s visual system. The goal was to better understand how we perceive, and along the way, design machines that “see” better.

During those years, the precursors of today’s deep learning models were developed. The key: self-organization. Importantly, these models and their later deep learning cousins did not rely on the two hand-tuned steps of traditional machine vision. Instead, they shifted the burden of finding (learning) those features and thresholds from the scientist to the deep learning model. Scientists still had to use their brains to devise equations that enabled this generalized learning directly from the data, but now that work only had to be done once.

This is really the key to deep learning: One does not need to handcraft a machine vision model for every case, but rather devise a learning machine that can be taught virtually anything directly from data, whether to classify fruits, airplanes or products in a machine.
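
The contrast can be seen in a minimal training sketch, here assuming PyTorch and a tiny, made-up dataset of labeled 64x64 product images; the architecture and hyperparameters are illustrative, not a recommendation.

```python
import torch
import torch.nn as nn

# A small convolutional classifier; labels 0 = "good", 1 = "defective" (stand-in data below).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                # two classes: good vs. defective
)

images = torch.randn(32, 3, 64, 64)            # stand-in for labeled product images
labels = torch.randint(0, 2, (32,))            # stand-in for inspector-provided labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                         # the features and the decision rule are *learned* here,
    optimizer.zero_grad()                      # not hand-tuned by an engineer
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print(model(images).argmax(dim=1)[:5])         # predicted classes for the first few samples
```

The same code, fed different labeled images, can learn to separate apples from oranges or good parts from scratched ones; no one hand-picks features or thresholds.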

Deep Inspections: Bringing The Power And Flexibility Of AI To Every Manufacturing Camera

In the machine vision-dominated world of quality control, deep learning represents a vital innovation, particularly at a time when more and more needs to be done with fewer people.

With machines able to produce extremely variable, always-changing products at rates that can easily surpass 60 items per minute, deep learning is changing the machine vision landscape, especially through products that incorporate edge learning (the ability to learn directly on the camera or machine).

Deep learning running at edge nodes in machines today enables dozens of cameras to learn new item types and defects in a variable production environment where new items are constantly introduced, and new, previously unseen defects show up on the line. Machine vision could not tackle this task — there are too many specialized, hand-tuned features and thresholds, each product coming with its own very complicated set of requirements. Deep learning brings down the cost and time to optimize quality inspection to a level that makes it technically and economically feasible for manufacturers of all kinds.
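
One common way to approximate this kind of edge learning, sketched below under the assumption of a PyTorch model running on the edge node, is to keep a pretrained feature extractor frozen and retrain only a small classification head on a handful of frames labeled on the line; the backbone, class count and hyperparameters here are stand-ins, not any vendor’s actual method.

```python
import torch
import torch.nn as nn

# Stand-in backbone; a real deployment would load pretrained weights onto the edge device.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False            # the heavy, pretrained part stays fixed on the edge device

num_classes = 3                        # e.g. "good", a known defect, a newly introduced defect
head = nn.Linear(32, num_classes)      # only this small layer is (re)trained at the edge

few_shot_images = torch.randn(8, 3, 64, 64)            # a handful of frames labeled on the line
few_shot_labels = torch.randint(0, num_classes, (8,))
optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(20):                 # a few quick passes over the new examples
    optimizer.zero_grad()
    loss = loss_fn(head(backbone(few_shot_images)), few_shot_labels)
    loss.backward()
    optimizer.step()
```

Because only the small head is updated, the retraining takes seconds of compute, which is what makes learning next to the line plausible without sending data back to a central server.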

Deep learning is a paradigm-shifting technology that is powering a clear path to the Industry 4.0 revolution by shifting intelligence from the engineer and quality inspector to a piece of software continuously operating at the compute edge, where it is needed, with the speed, latency and cost that make it possible to efficiently achieve 100% inspection.

While machine vision has served its purpose, deep learning-enabled cameras will bring innovation to a sector that has never been more in need of breakthroughs.