
AI: the opacity myth and the rise of explainability

“A mystery is only a mystery when you don’t know the answer. Once you do, it’s just a fact,”

(An ancient philosopher? No, just Max Versace, 2024)

I had the pleasure to speak at “AI: What’s the Hype? Legal and Ethical Implications”, a wonderful event organized by PIB – Italian Professionals in Boston (thank you Francesca Seta, PhD, Cesare Ferri, Giovanni Abbadessa, Samuele Bazzacco, Vittoria Ballerini), sharing the stage with two experts in law, Elettra Bietti and Stacey Dogan.

Many great topics were raised, and in the coming weeks I will dive into some key takeaways. First up: explainability, one of the most critical and misunderstood aspects of AI today, namely, understanding WHY AI systems make the decisions they do. This is not just a technical issue; it is central to how we trust and interact with AI.

When the box is not that black…

AI is the topic I have been working on for the past 25 years, and how it works is clearer to me than, say, how a radio or a nuclear reactor works. It is also clear that, due to the surge in its popularity, many are now paying attention to it and trying to understand its deepest workings as fast as possible (faster than AI itself will develop, I think that’s the reasoning?). Not easy, as it’s a technical topic that requires study and is not amenable to a “quick read”.

One of the myths and misunderstandings around this tech is the idea of AI as a black box: something that just works, but nobody really knows how.

This is not quite right. AI isn’t magic; it’s a man-made object. I have built many of those objects over 25 years, from some really simple – e.g., 5-neuron networks able to solve simple logic tasks – to some very complicated – billions of neurons in brain-like structures able to drive a robot and have it behave like a small animal.
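
To make the “man-made object” point concrete, here is a minimal sketch of such a tiny network: five units (two inputs, two hidden neurons, one output) wired by hand to compute XOR. The weights are hypothetical illustration values, not from any system I built, and every number in it can be read and understood.

```python
import numpy as np

# A tiny, fully inspectable 5-unit network (2 inputs, 2 hidden, 1 output)
# hand-wired to compute XOR. All weights are illustrative, hand-picked values.

def step(x):
    return (x > 0).astype(float)  # simple threshold activation

W_hidden = np.array([[ 1.0,  1.0],   # hidden unit 1 fires on "x1 OR x2"
                     [-1.0, -1.0]])  # hidden unit 2 fires on "NOT (x1 AND x2)"
b_hidden = np.array([-0.5, 1.5])

w_out = np.array([1.0, 1.0])         # output fires when both hidden units fire
b_out = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W_hidden @ np.array(x) + b_hidden)
    y = step(w_out @ h + b_out)
    print(x, "->", int(y))           # prints the XOR truth table: 0, 1, 1, 0
```

Nothing in there is mysterious: each neuron is an equation you can print, plot, and reason about. Scale changes the bookkeeping, not the nature of the object.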

Just like any other piece of technology, the smartphone you are reading this article on or the car in your garage, AI is a tool we designed, built, and tuned. Since it’s human-made, its opacity is, and always will be, far less than that of many natural phenomena we have spent centuries studying, like biology, the weather, or the universe.

Speaking of the universe…

AI is not alone

Is AI the only system or technology we use without fully understanding it?

No, this is false, and if we want a truly unbiased approach to AI, we need to do exactly that: remove our biases and take a broader look.

Take quantum mechanics, a field that has been developed over the past 100+ years and has given us some of the most transformative technologies we use today — everything from semiconductors and lasers to MRI machines and GPS. It’s a science that has broadened and deepened our understanding of the universe, allowing us to describe phenomena at the smallest scales with incredible precision, and a tech that we use countless times every day. It helped me write this piece on a PC, using a processor that could not have been built without an understanding of quantum phenomena.

Yet, nobody really knows WHY it works.

That’s right. If you ask even the most skilled physicist why quantum mechanics behaves the way it does, you might be met with a shrug. As Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” The attitude behind the mainstream Copenhagen interpretation (yes, there are many interpretations, precisely because nobody knows WHY it works) is often summed up as: “Shut up and calculate.” In other words, we may not fully grasp the philosophical “why” behind quantum behavior. What we do know is that quantum mechanics works, and we use it confidently every day.

The same applies to AI. We may not always be able to explain every decision an AI makes in layman’s terms, but, just as with quantum mechanics, we have built AI systems that deliver results, and those results are explainable and usable in ways that profoundly impact our lives.

But we should not settle for that, right? We can do better, since AI is man-made (unlike the universe described by quantum physics).

“Explainability does not work everywhere!” Not so fast

Explainability is a new concept in the world of AI. It was not always important because, in the early days, AI didn’t work all that well. Many years ago, nobody was impatiently asking for an explanation of why the AI… did not work! We were busy building AI and trying to make it work, often with results that weren’t worth explaining.

But as with every technology into which many smart people pour years of tireless work and billions of dollars, something happened (magic?): AI got better. As networks started producing useful results, solving problems, and supplementing humans in making fast and complex decisions, the questions became relevant.

At Neurala, we saw firsthand how the demand for explainability became urgent only once our AI started performing well. When we first developed AI for visually inspecting products on the factory floor, our customers were not concerned with why it wasn’t working! They just wanted it to function and be useful. As soon as we crossed that threshold and the AI began delivering strong results, the questions started coming: “Why did your AI classify this product as defective?”

Explainability became urgent only when AI proved itself worthy of being explained. We introduced it in our product around 2020 and launched it in 2021.
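
What does answering that customer question look like in practice? Here is a minimal sketch of one generic, widely used technique, occlusion sensitivity; it illustrates the idea, and is not a description of Neurala’s actual implementation. The model_score function is a hypothetical stand-in for any trained classifier’s “probability of defect” output.

```python
import numpy as np

def occlusion_map(image, model_score, patch=8):
    """Slide a neutral patch across the image and record how much the
    'defective' score drops; big drops mark the regions that drove the call."""
    base = model_score(image)                       # score on the intact image
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # hide one region
            heatmap[i // patch, j // patch] = base - model_score(occluded)
    return heatmap  # high values = areas the model relied on most

# Hypothetical usage, with a grayscale product image and any scoring function:
# heat = occlusion_map(product_image, score_defective)
# The bright spots of 'heat' point at the scratch, dent, or misprint that
# pushed the classifier toward "defective".
```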

Yes, AI systems are complex: large systems consist of billions of equations and parameters, and they process large amounts of data through multiple layers of computation to reach conclusions. But this complexity does not have to equal opacity.
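
To see why, consider that even a very large network is just the same simple, fully inspectable step repeated many times: a matrix multiply followed by a nonlinearity. A toy sketch, with layer sizes invented purely for illustration:

```python
import numpy as np

# Even a "large" network is one simple operation, repeated layer after layer.
# Sizes (and random weights) are illustrative only; real systems use billions
# of trained parameters, but the recipe is the same.
rng = np.random.default_rng(0)
sizes = [1024, 4096, 4096, 10]        # input -> hidden -> hidden -> output
layers = [rng.normal(size=(n_out, n_in)) * 0.01
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=sizes[0])         # a stand-in input vector
for W in layers:
    x = np.maximum(W @ x, 0.0)        # one layer: multiply, then ReLU
print(f"{sum(W.size for W in layers):,} parameters, every one inspectable")
```

Every intermediate value in that loop can be logged, visualized, and questioned; opacity is a choice of tooling, not a law of nature.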

If we at Neurala have found ways to explain complex systems, others will too, especially once demand for this feature reaches a level where you can’t ignore it.

In other words: it became a product feature.

I firmly believe that AI is less opaque than many other fields we study and technologies we use. As AI is adopted more widely in our businesses and lives, explainability will evolve from a product feature into an essential expectation. You still can’t find it in many applications? Do not judge AI’s future by what it can do on October 2, 2024; tech needs money and time to develop.

The good news: AI is a tool we created, and if we want, we can make it transparent, accountable, and capable of explaining its decisions. Unlike the natural forces we study, AI is not beyond our control; it is something we designed, and it is within our willpower and ability to ensure we understand and improve its behavior.

#ArtificialIntelligence #AI #Explainability #EthicsInAI #TechInnovation #MachineLearning #Transparency #AIEthics #Technology #AIForGood