
WHY AI THAT LIVES AND LEARNS ON THE DEVICE WILL SAVE OUR PRIVACY

Mark Zuckerberg's summons to Washington, after Cambridge Analytica improperly accessed the personal data of millions of Facebook users, was a defining moment of 2018 and a pivotal one in our digital existence. Two fronts have clashed: the fast-moving pack of internet giants, harvesting and mining seas of data, and the static, slow-moving forces of politics and regulation. At the center of it all is artificial intelligence.


Nobody wants to hand off their data unless they feel that their digital footprint is protected and guarded. This desire is now also backed by legislation. Europe's new General Data Protection Regulation (GDPR) states clearly, among other things, that companies must limit and minimize the amount of data they collect and keep, focusing on what is strictly necessary to fulfill clearly stated business purposes. Companies will only be able to keep the data for a limited amount of time, and users can request the deletion of their data. We expect aspects of this protection to be extended rapidly to other countries, including the U.S.

That's a big problem for the way AI is currently built and trained. Historically, AI systems have not been engineered in a way that is conducive to preserving privacy: they need to retain all of their training data in order to be updated and improved. Luckily, this can be overcome by a new kind of AI technology that lives directly on the user's device and performs all of its functions right there, without needing to save a backlog of the user's sensitive information to be useful.

Traditionally, companies amassed data of all kinds about the users of their services, including Facebook, but they didn't have an efficient way to harvest it. Enter AI algorithms: data-driven mathematical models that learn from data and extract meaning from it. It's as if these companies were sitting on huge oil reservoirs with only a hand shovel, and now they have been handed a powerful mechanized drill. And they are drilling.

The Backpropagation “Brain”

Developed from theoretical work that had its roots in the '60s, today's deep neural network (DNN) algorithms, the sub-field of AI delivering the biggest wins for the research community, are large-scale mathematical systems that capture aspects of brain function and organization by simulating, in a simplistic way, the vast networks of interconnected neurons that can be trained to execute tasks. These tasks vary from visual and auditory perception to motor control and more abstract functions, such as catching a network attack on a server; classifying financial data as fraudulent or legitimate; or classifying a piece of equipment as normal, defective or rusted.

While the nature of the input data and task varies, these systems derive their power from the ability to learn from the data (as opposed to being pre-programmed to perform a function) and overwhelmingly use a learning formalism crystallized in the ’80s called “backpropagation.”

In the backpropagation method, an algorithm that radically departs from the way human and animal brains work, the neurons of a large neural network change their synapses, or connectivity coefficients, according to the error contribution of each neuron after a batch of data has been processed by the network. In essence, if the network is shown a "giraffe" and replies, "I see a zebra," millions of neurons will change hundreds of millions or billions of synaptic weights, each based on how much it contributed to the wrong answer, in a direction that makes the network more likely to classify the giraffe correctly the next time it sees one.

The name "backpropagation" refers to the fact that the error of the network is computed at the output neurons, the ones that classify zebras versus giraffes, and propagated back to all the neurons that feed into them, all the way to the first neurons of the network, the ones presented with the input image. The more a neuron contributed to the wrong answer, the larger the correction to its synapses.

While the input data can be images, sounds or more abstract data, such as financial transactions, network traffic or text, the principle is the same: the algorithm optimizes the network's output by iteratively adjusting the weight of each neuron, repeating the process over the data thousands or millions of times until the error is small enough to call learning "done."
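
To make the mechanics concrete, here is a minimal sketch of such a training loop in Python with NumPy. The toy two-layer network, the XOR-style data and the learning rate are all illustrative choices of mine, not taken from any particular system:

```python
import numpy as np

# Toy dataset: four 2-D inputs with XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):  # thousands of passes over the SAME data
    # Forward pass: the network's current answer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # The error is measured at the output neurons...
    err = out - y

    # ...and propagated backward: every weight is nudged in proportion
    # to how much it contributed to the wrong answer.
    delta_out = err * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out
    W1 -= lr * X.T @ delta_h

print("mean error after training:", float(np.abs(err).mean()))
```

Note how the same four examples are replayed ten thousand times: iteration over stored data is built into the method itself.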

This is great, in that it enables AI systems based on backpropagation to match and sometimes surpass human-level performance in an ever-increasing list of tasks, from playing chess and Go to understanding traffic signs and medical data.

Today’s AI Has a Learning Disorder

However, this super-performance comes at a price: backpropagation networks, because they change their synaptic weights based on the current prediction error, are very sensitive to new information and susceptible to catastrophic interference, where learning something new wipes out old information. In a sense, they have a form of learning disorder.
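
The effect is easy to reproduce. In the sketch below (a bare-bones, single-layer classifier in NumPy on made-up data, purely for illustration), the model masters task A, then continues training only on task B, whose examples pull the same weights in a different direction, and its performance on task A collapses:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(pos_center, neg_center, n=100):
    """Two Gaussian blobs: class 1 around pos_center, class 0 around neg_center."""
    X = np.vstack([rng.normal(pos_center, 0.3, size=(n, 2)),
                   rng.normal(neg_center, 0.3, size=(n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train(W, X, y, steps=500, lr=0.1):
    """Plain gradient descent on logistic loss: a one-layer stand-in for backprop."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ W)))   # predicted probability of class 1
        W = W - lr * X.T @ (p - y) / len(y)  # error-driven weight change
    return W

def accuracy(W, X, y):
    return float((((X @ W) > 0) == y).mean())

task_a = make_task(pos_center=[2.0, 0.0], neg_center=[-2.0, 0.0])
task_b = make_task(pos_center=[-2.0, 2.0], neg_center=[2.0, -2.0])

W = train(np.zeros(2), *task_a)
print("task A accuracy after learning A:", accuracy(W, *task_a))  # ~1.0

W = train(W, *task_b)  # keep learning, but only on task B data
print("task A accuracy after learning B:", accuracy(W, *task_a))  # collapses
```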

Backpropagation became the de facto standard learning algorithm for neural networks, and the unintended consequence is that the whole sprawling AI industry today suffers from "Memento syndrome": like the protagonist of the popular movie, its AI knows only what it was taught before being fielded, and nothing new can be learned during daily operation.

But getting back to data privacy, backpropagation comes with another major drawback: all input data must be saved for re-training. For example, if a DNN has been trained on 1,000 images and needs to learn an additional image, then 1,001 images need to be presented for thousands or millions of iterations. What happens if those 1,000 images can’t be legally saved? Training can’t occur, and the network cannot be updated.
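
Continuing the sketch above, the only remedy available to a backpropagation-based system is to retain every old example and retrain on the combined set; if the old data must be deleted, this step simply cannot run:

```python
# Retraining works only if every old example was retained:
X_all = np.vstack([task_a[0], task_b[0]])
y_all = np.concatenate([task_a[1], task_b[1]])

W = train(np.zeros(2), X_all, y_all)             # replay ALL the data from scratch
print("task A accuracy:", accuracy(W, *task_a))  # high again
print("task B accuracy:", accuracy(W, *task_b))  # high as well

# If task A's data had to be deleted (e.g., under GDPR), there would be
# nothing left to replay, and the network could not be updated.
```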

In essence, the technical requirements of today's AI run directly counter to what legislation such as the GDPR demands.

Learning from the Master


While current DNNs are hyper-simplified models of the brain, biology has a richer repertoire of tools at its disposal. Take the human brain. What DNNs lump into "connections" or "weights," the brain unpacks into a plethora of substances and structures, neurotransmitters and synapses, ranging from small-molecule transmitters to neuropeptides, each targeting different kinds of receptors and synapses. Small-molecule transmitters act directly on other neurons and most closely approximate, in a very simplified manner, what traditional AI and DNNs are doing today.

Neuropeptides, or "modulators," on the other hand, mediate subtler effects in their targets.

Most of us have had access to some sort of standardized education, going to school to learn a set of important skills. When the time comes to get a job, we take those skills and that knowledge with us, and we quickly recognize that day-to-day learning is the most important way we get better at our jobs and progress in the workplace and in society. As humans, we do this day after day, even into old age. More importantly, we do it quickly: for most of what we store in memory, a few learning episodes are enough. This modality of learning stands in stark contrast to traditional DNNs, where the human equivalent would be knowing only what we were taught in school and learning nothing afterward.

The joint mathematical modeling of small-molecule and neuropeptide transmitters in new "lifelong" DNN architectures (Lifelong-DNN™, or L-DNN) enables them to express two distinct temporal scales, fast and slow learning, that traditionally have been lumped into a single set of equations. This major innovation lets the AI keep the advantages of slow learning, which backpropagation has capitalized on, while also leveraging the advantages of fast learning.
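
Neurala has not published the Lifelong-DNN equations, so the following is purely my own toy illustration of the general idea of two timescales: a slow, stable set of weights plus a fast, one-shot trace, rather than a single learning rate for everything:

```python
import numpy as np

class FastSlowLayer:
    """Toy two-timescale learner. This is NOT the actual L-DNN math (which
    is not public); it only illustrates a fast, volatile trace layered on
    top of slow, stable weights."""

    def __init__(self, n_in, n_out, fast_lr=0.5, slow_lr=0.01, decay=0.9):
        self.W_slow = np.zeros((n_in, n_out))  # consolidated, changes slowly
        self.W_fast = np.zeros((n_in, n_out))  # volatile, changes in one shot
        self.fast_lr, self.slow_lr, self.decay = fast_lr, slow_lr, decay

    def predict(self, x):
        # Both timescales contribute to the response.
        return x @ (self.W_slow + self.W_fast)

    def learn(self, x, target):
        err = target - self.predict(x)
        # Fast trace: a big, decaying step, so a single example is captured
        # at once without permanently disturbing the slow weights.
        self.W_fast = self.decay * self.W_fast + self.fast_lr * np.outer(x, err)
        # Slow weights: small, gradual consolidation of repeated experience.
        self.W_slow += self.slow_lr * np.outer(x, err)

layer = FastSlowLayer(n_in=4, n_out=2)
x, target = np.ones(4), np.array([1.0, 0.0])
layer.learn(x, target)   # a single presentation already shifts the output
print(layer.predict(x))
```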

Solving the Privacy Issue

Additionally, because it is so mathematically compact, the new Lifelong-DNN paradigm enables lifelong learning directly on the device. The chipset can be as inexpensive as the one powering a low-end smartphone, and the system works without Wi-Fi or cellular connectivity, and without needing to store all the training data on the phone or device.

This means that each data point is used only once in the learning process: millions of iterations are not required. Once a new piece of data has been learned, it can be discarded. Privacy is no longer an issue.
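
To see why single-pass learning removes the need for a data backlog, consider this generic sketch of a running class-prototype classifier, a textbook one-pass technique that I am using as a stand-in (not necessarily what L-DNN does internally). Each example updates a per-class average exactly once and can then be deleted:

```python
import numpy as np

class RunningPrototypes:
    """Single-pass learner: one running mean feature vector per class.
    Each example updates its class prototype exactly once; no training
    set is ever stored."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn(self, features, label):
        self.sums[label] = self.sums.get(label, 0.0) + features
        self.counts[label] = self.counts.get(label, 0) + 1
        # The caller is free to delete `features` right after this call.

    def predict(self, features):
        # Nearest prototype by Euclidean distance.
        return min(self.sums, key=lambda c: np.linalg.norm(
            features - self.sums[c] / self.counts[c]))

model = RunningPrototypes()
model.learn(np.array([1.0, 0.0]), "giraffe")  # each example seen once...
model.learn(np.array([0.0, 1.0]), "zebra")    # ...then thrown away
print(model.predict(np.array([0.9, 0.1])))    # -> "giraffe"
```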

AI is a relatively young technology, and, as a field, it is only scratching the surface of what is possible. We should not forget that only an infinitesimally small fraction of what is known about biological brains is coded into today's working, end-user applications. Even L-DNN is very far from what biological brains can achieve in terms of information processing. But innovations like L-DNN are at the core of building AI that is both useful and respectful of people's privacy.