Brain-in-a-Box: a unified Deep Learning framework for mobile robots, drones, and automotive
(Keynote at NVIDIA GTC Washington, DC, 2016)
Today’s off-the-shelf low-cost, low-power parallel processors open a wealth of opportunities for running robotic vision and navigation algorithms on board, within a reasonable size, power envelope, and cost. However, a truly autonomous, portable, low-power, passive (that is, using no active sensors) solution for GNSS-free navigation of ground and aerial robots, as well as self-driving cars, has remained elusive. The gap lies largely at the algorithm level: state-of-the-art SLAM (Simultaneous Localization and Mapping) solutions are computationally cumbersome and rely on active sensors, and the passive solutions that do exist tend to be unreliable.
The talk describes in detail a GPU-based, brain-inspired (neuromorphic) software system, named NeuroSLAM, that enables mobile robots to autonomously navigate, map, and explore an unknown environment without relying on GPS or active sensors. Exploring novel environments, memorizing the locations of obstacles and objects, building and updating a representation of the environment while exploring it, and returning to a safe location are all behaviors that animals perform efficiently every day. NeuroSLAM, developed in a virtual environment and then deployed on mobile robots, mimics these abilities in software by leveraging off-the-shelf low-cost cameras, inertial measurement units (gyroscopes/accelerometers), and NVIDIA GPUs.
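For readers unfamiliar with this style of passive navigation, the minimal Python sketch below illustrates the general pattern the abstract describes: dead reckoning from gyroscope/accelerometer readings, corrected by camera-based place recognition (loop closure). It is an illustrative assumption throughout; the class, method names, and thresholds are hypothetical and none of it is the actual NeuroSLAM code.

import numpy as np

# Hypothetical sketch (not the NeuroSLAM API): fuse IMU dead reckoning
# with camera-based place recognition, using no GPS or active sensors.
class VisualInertialMapper:
    def __init__(self, match_threshold=0.9):
        self.pose = np.zeros(3)               # (x, y, heading) in the world frame
        self.velocity = 0.0                   # forward speed (m/s)
        self.match_threshold = match_threshold
        self.templates = []                   # stored (visual descriptor, pose) pairs

    def predict(self, gyro_z, accel_x, dt):
        """Dead reckoning: integrate raw IMU readings into the pose estimate."""
        self.pose[2] += gyro_z * dt           # heading from gyroscope rate
        self.velocity += accel_x * dt         # speed from forward acceleration
        self.pose[0] += self.velocity * np.cos(self.pose[2]) * dt
        self.pose[1] += self.velocity * np.sin(self.pose[2]) * dt

    def correct(self, frame_descriptor):
        """Place recognition: if the current camera view matches a stored
        template, relax the drifting pose toward the remembered location."""
        for descriptor, stored_pose in self.templates:
            similarity = float(np.dot(descriptor, frame_descriptor))
            if similarity > self.match_threshold:
                # Crude loop closure: average toward the memorized pose.
                self.pose = 0.5 * (self.pose + stored_pose)
                return True
        # Novel view: memorize it together with the current pose estimate.
        self.templates.append((frame_descriptor.copy(), self.pose.copy()))
        return False

# Illustrative use: one IMU prediction step, then a camera correction step.
mapper = VisualInertialMapper()
mapper.predict(gyro_z=0.02, accel_x=0.1, dt=0.05)
closed_loop = mapper.correct(np.ones(64) / 8.0)   # unit-norm descriptor

In a real system the per-frame descriptor would come from a learned or hand-crafted visual front end, and the correction step would run over a full experience map rather than a flat template list; the sketch only shows where the camera and IMU data enter the loop.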
LINK: GTC Website