Understanding AI startups
AI is not for the faint of heart. It’s a frontier tech, with great unknowns not only on the technology side, but also on the product and business ones. Making money in AI (and ONLY selling AI… if you make money selling ads and also sell AI, this article does not apply!) is not something that has been done repeatedly at scale as of 2020.
I recently came across this nice article by Martin Casado and Matt Bornstein from Andreessen Horowitz. I actually had the chance to chat with Matt a few months ago about the very same topic this post focuses on: challenges and opportunities in AI & SaaS.
I highly suggest it as a read… it gives me the weird feeling of being one of the (unaware) protagonists of a novel I just found on the Web! (Yes, I am the CEO of an AI startup, Neurala.)
The article is a very nicely articulated portrait of the status, economics & prospects of traditional startups (and not only startups) delivering AI tech. It’s somehow funny to juxtapose “traditional” and “AI” in the same expression, as AI is, in the (natural) intelligence of many, a brand-new tech.
However, although making money with Deep Learning & Neural Networks is a relatively new sport, the underlying math, algorithms, and workflows have remained (virtually) the same since I programmed my 1st Neural Networks 25 years ago.
The post argues, in summary, that AI startups face three main challenges:
- CLOUD cost issues: Lower gross margins due to burning huge chunks of cash enriching Amazon & Co., because you need to train on huge amounts of data, not just once, but over and over again (see below why);
- CORNER CASES creeping out: Problems in scaling, as every problem is unique & edge cases creep out constantly, so you need to retrain and pay cloud fees again (see above!); and
- OPEN SOURCE competition: A hard time defending their businesses/tech, because Academia and Open Source continuously pump out “new AI”.
A nonlinear combination of the CLOUD, CORNER CASES, and OPEN SOURCE factors drags gross margins into the 50–60% range (vs the 60–80%+ SaaS benchmarks), which is not fun. While artificially intelligent, these AI companies are, business-wise, dumber than their supposedly lower-tech cousins.
I tend to agree with the main points of the article and the analysis. With an important set of twists.
The article applies to “traditional” AI tech & companies. Namely, the traditional workflow of collecting data, training with canonical BackPropagation (iterating trillions of times on an Amazon GPU server…), and redoing this an n = infinity number of times because the environmental conditions change.
Yes, the above is a model that will make AI startups vulnerable to the gross margins and challenges described above.
Fortunately, this modus operandi is not the only one!
Let’s look at the three components, CLOUD, CORNER CASES, and OPEN SOURCE, one by one.
CLOUD: Get an Edge on AWS…
Training AI on the cloud may cost you hundreds of thousands of dollars in cloud fees EACH TIME YOU TRAIN! Traditional Deep Neural Network (or DNN) training requires iterating over gazillions of data points. Also, unlike biological learning, you need to store all your training data, so that you can retrain from scratch whenever you need to enrich your AI model. We have also found this to be 100% true and expected: AI should never stop learning (e.g., see this article on applying AI to manufacturing).
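To make the “EACH TIME YOU TRAIN” point concrete, here is a back-of-envelope calculation. Every number in it (GPU count, hourly rate, retrain frequency) is a hypothetical assumption for illustration, not an actual cloud quote:

```python
# Back-of-envelope cloud training cost. All numbers below are
# illustrative assumptions, not real price quotes.
gpus = 64               # GPUs used in a single training run
hours_per_run = 200     # wall-clock hours per full training run
usd_per_gpu_hour = 3.0  # assumed on-demand cloud GPU price
retrains_per_year = 12  # retrain monthly as conditions (and data) drift

cost_per_run = gpus * hours_per_run * usd_per_gpu_hour
print(f"Cost per training run: ${cost_per_run:,.0f}")              # $38,400
print(f"Cost per year: ${cost_per_run * retrains_per_year:,.0f}")  # $460,800
```

The exact figures matter less than the structure: the bill is multiplied by the number of retrains, which is exactly the term that incremental learning attacks.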
The good news is that this is not the only way. At Neurala, we have pioneered a new Deep Learning technology that enables training directly on the compute Edge. Read more here about this tech, called Lifelong DNN (L-DNN). In addition to not requiring a cloud resource to train, L-DNN does not require storing the input data, rendering the data/storage cost moot.
Additionally… you can use tech like L-DNN even if you stick to a cloud training model, with huge benefits! In particular (see the sketch after this list):
- There’s a lot less data needed to train the model to saturation accuracy;
- The amount of compute required per training sample is a lot lower; and
- If there is data drift (I like this term!), only the new data needs to be added on top of the model, rather than repeating the training on the entire dataset.
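To illustrate that last point, the sketch below contrasts the two workflows using scikit-learn’s generic partial_fit API. The data and model are hypothetical stand-ins, and this is an illustration of incremental learning in general, not of L-DNN itself (whose internals are proprietary):

```python
# A minimal sketch of incremental vs. from-scratch training, using
# scikit-learn's generic partial_fit API. Hypothetical data; NOT L-DNN.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(10_000, 64)), rng.integers(0, 2, 10_000)
X_new, y_new = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)  # "data drift"

# Traditional workflow: every drift event means retraining on ALL the data,
# so the original training set must be stored forever.
full = SGDClassifier().fit(np.vstack([X_old, X_new]),
                           np.concatenate([y_old, y_new]))

# Incremental workflow: update the existing model with only the new samples;
# the old raw data can be discarded after the initial fit.
inc = SGDClassifier()
inc.partial_fit(X_old, y_old, classes=np.array([0, 1]))
inc.partial_fit(X_new, y_new)  # cost scales with 500 new samples, not 10,500
```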
So, if your AI is old-fashioned DNN, the post applies. If you can train incrementally, as you can with L-DNN, and/or even on-device (on the Compute Edge) and can discard training data, it does not.
CORNER CASES: never stop learning
Intelligence is so deceptive… as humans, we never stop learning (one of the basic teachings from my Neuroscience 101)… but we do it so effortlessly that we do not realize it happens.
So deceptive and effortless, in fact, that few scientists (but, as we see, not a zero set…) have focused on solving this problem, and fewer still have turned the solutions into tech.
Therefore, if you are using a traditional DNN, you will face this challenge, as the article eloquently puts it, due to data & conditions you either have not accounted for or have not collected before handing $100K to AWS for training:
“Handling this huge state space tends to be an ongoing chore. Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar — two auto manufacturers doing defect detection, for example — may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.”
Again, though, if your technology enables continuous learning after deployment, then this problem also goes away. See here.
Additionally, and again resonating with the article, what we have invariably seen from our customers is that data is heavily protected and, simultaneously, quickly useless, as conditions (and therefore data) change all the time. E.g., in a manufacturing or retail setting, new products are introduced that differ from your (pre-trained) data, so you can kiss your collected and tagged data goodbye and have to restart from scratch.
However, a tech, method, and workflow that enables the users of AI to quickly upload and train on their own unique data, without having to train in the cloud, and to tweak/change/augment that model on the fly at the Edge, makes the problem above non-applicable.
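As a thought experiment, here is one (hypothetical, generic) way to augment a deployed model on-device: keep a frozen feature extractor and enroll new classes by averaging a handful of feature vectors. Again, this illustrates the learning-after-deployment idea in general, not Neurala’s actual L-DNN:

```python
# A minimal sketch of on-device class addition, assuming a frozen feature
# extractor and a nearest-class-mean head. A generic illustration of
# learning after deployment; NOT Neurala's proprietary L-DNN.
import numpy as np

class NearestMeanHead:
    """Classifier head that can absorb a new class from a few samples."""
    def __init__(self):
        self.means = {}  # class label -> mean feature vector

    def add_class(self, label, features):
        # One pass over a few feature vectors: no gradients, no stored raw data.
        self.means[label] = features.mean(axis=0)

    def predict(self, feature):
        # Return the class whose mean is closest to the query feature.
        return min(self.means, key=lambda c: np.linalg.norm(feature - self.means[c]))

head = NearestMeanHead()
head.add_class("widget_a", np.random.default_rng(1).normal(size=(20, 128)))
# A new product appears on the line: enroll it on the spot, on the device.
head.add_class("widget_b", np.random.default_rng(2).normal(size=(20, 128)))
print(head.predict(np.random.default_rng(1).normal(size=128)))
```

No cloud round-trip, no retained training set: the update cost is a single averaging pass over the new samples.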
OPEN SOURCE: AI does not shield you from having to build a good product!
Yes, new AI models come out all the time. However, contrary to many, I think most are marginal improvements & inbreeding of a very select set of models, and we all know inbreeding leads to sterility. So, I think there is space for proprietary stuff.
However, the post is spot on (I can’t say it better than how it was put) about the need to productize the right way:
“While it’s not clear whether an AI model itself — or the underlying data — will provide a long-term moat, good products and proprietary data almost always builds good businesses. AI gives founders a new angle on old problems. […] The opportunity to build sticky products and enduring businesses on top of initial, unique product capabilities is evergreen.”
In summary: a great article and a great read, which opens up and lays on the table the main technical, product, and business challenges ahead of scalable and profitable AI deployments.
Understanding the problems & asking good questions is 90% of giving a good answer!