Seeing Facebook’s CEO summoned to Washington after Cambridge Analytica improperly accessed the personal data of about 87 million Facebook users will be remembered as a powerful defining moment, and not only for Internet giants. The ripples of this event will be felt, appropriately, in technology fields like Artificial Intelligence, which is designed to streamline the extraction of actionable information from data.
As CEO of an AI company, I welcome that moment. I believe that, like any powerful technology, Artificial Intelligence will be used overwhelmingly for good, flanked by some abuses, which regulations exist to prevent and eliminate.
Laws and regulations are important in shaping how technology is developed and how it is used. I like to visualize regulations as boulders dropped into a river: they do not prevent the river from eventually flowing to the ocean, but they modify its course and delay it a bit.
In this case, the boulder is being dropped in the EU first, and later in the US (at least, that’s my hope).
The ‘boulder’ falling on AI targets the ability to mine somebody’s personal information, connect it to ‘all that is known’ about them, and possibly sell that information for a profit.
Traditionally, data (yours too! Do you own a Gmail or Facebook account?) is amassed on users of various services, Facebook being one of many examples, and that mass of data can be mined with AI or other techniques, packaged, and sold to other businesses.
There is an alternate path the ‘AI river’ can take once this boulder is dropped: an AI that respects privacy. For instance, my company, Neurala, builds AI that is created and learns directly on the device.
Unlike Facebook and Google, we do not ship your data to a server. Our AI lives directly on somebody’s device, such as a phone, learns to perform a function there, and does not need to know your name, or keep all your pictures, to be useful. It uses a technology called Lifelong Deep Neural Networks (L-DNN), developed with NASA, that enables learning directly on the device without pinging the cloud. For this reason, your data stays yours, yet you can reap the benefits of AI, which is what all consumers want.
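To make the idea concrete, here is a minimal sketch of on-device incremental learning in general. It is not Neurala’s L-DNN (which is proprietary and not described here); it simply illustrates, using scikit-learn’s SGDClassifier as a stand-in and a hypothetical OnDeviceLearner class, how a model can be updated from a local data stream without any raw data ever leaving the device.

```python
# Illustrative sketch only: NOT Neurala's L-DNN. A toy example of the general
# idea of on-device incremental learning, where raw data never leaves the
# device and the model is updated one locally captured sample at a time.
import numpy as np
from sklearn.linear_model import SGDClassifier


class OnDeviceLearner:
    """A tiny classifier that updates itself from data kept locally."""

    def __init__(self, classes):
        self.model = SGDClassifier()          # simple incremental learner
        self.classes = np.array(classes)

    def learn(self, features, label):
        # Incremental update from a single example, processed on the device.
        # Nothing is uploaded to a server.
        self.model.partial_fit(
            features.reshape(1, -1), [label], classes=self.classes
        )

    def predict(self, features):
        return self.model.predict(features.reshape(1, -1))[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    learner = OnDeviceLearner(classes=[0, 1])

    # Simulate a stream of locally captured samples (e.g., sensor readings).
    for _ in range(200):
        label = int(rng.integers(0, 2))
        sample = rng.normal(loc=label, scale=0.5, size=4)
        learner.learn(sample, label)          # learning happens on-device

    test = rng.normal(loc=1, scale=0.5, size=4)
    print("Predicted class:", learner.predict(test))
```

The design point is simply that training and inference both run where the data is generated, so the service can be useful without a central copy of the user’s information.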
Ultimately, the benefits of AI outweigh the perils. Everybody wants the best AI: the best and safest self-driving car, the smartest Alexa, the most intelligent speech-to-text in their phone, the smartest AI-powered camera to take the most stunning pictures. But everybody is also protective of their privacy and does not want to be profiled or have their information commercialized. I don’t, and neither should you.
The good news is that AI that respects your privacy can be built, and is being built today.
As CEO of Neurala, I welcome these boulders being dropped. Some companies will disappear, some will modify their course and perhaps become more honest, and some will prosper and deliver AI that respects consumers.
That’s a win for AI and for everybody who uses it.