Rapid prototyping as a solution to costly, time-intensive, and ultimately ineffective AI projects
AI as the world knows it is still in its infancy. Most projects still follow an old-fashioned, slow, and painful workflow that goes something like this:
- Gather ginormous buckets of data and annotate them all
- Spend weeks or months building and training AI
- Deploy in a system for testing and see what happens
- Iterate on the steps above
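To make the cost of this loop concrete, here is a hypothetical back-of-the-envelope sketch comparing how many full build-test-learn cycles fit into one quarter under a months-long loop versus a minutes-long one. All durations are illustrative assumptions, not measurements:

```python
# Hypothetical comparison: how many complete build-test-learn iterations fit
# into one quarter (90 working days) for a slow vs. a rapid workflow.
# All durations below are illustrative assumptions, not measurements.

QUARTER_MINUTES = 90 * 8 * 60  # 90 working days of 8 hours each

def iterations_per_quarter(minutes_per_iteration: int) -> int:
    """Number of complete iterations that fit in one quarter."""
    return QUARTER_MINUTES // minutes_per_iteration

# Assumed: ~30 working days per iteration for the traditional workflow
slow = iterations_per_quarter(30 * 8 * 60)
# Assumed: ~15 minutes per iteration for a rapid-prototyping workflow
rapid = iterations_per_quarter(15)

print(f"slow workflow:  {slow} iterations per quarter")   # 3
print(f"rapid workflow: {rapid} iterations per quarter")  # 2880
```

Even with generous assumptions for the traditional loop, the gap is three iterations per quarter versus nearly three thousand, which is the difference the rest of this piece hinges on.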
The problem is that steps one and two can take months and create massive costs. Today, enterprises still rely on homemade open-source software tools to collect and curate their data. Worse still, there is a complete lack of standardization and of organizational tools compatible with modern software and project management practices.
Even worse, building AI with traditional deep learning algorithms is painfully slow: it takes a very large number of training iterations to produce a usable network.
Sometimes, the results are underwhelming: picture a robot picking its master's nose instead of items in a bin.
If we could prototype in minutes what currently takes months, we could iterate thousands of times as we build, rather than waiting months to see a single output.
The thing about iteration in AI is simple but powerful: every version is a learning episode for the AI application developer. The more time you save per iteration, the faster you learn from your mistakes and the closer you get to a working AI.
So, when evaluating your AI solution, be sure that the number one question you ask yourself is: how many times does this process allow me to iterate and learn?
You may end up saving your nose…!