Computation and the Future of Robotics

Recent advancements in computation have opened doors not only in software, but also in hardware, making this the most ubiquitous megatrend in robotics.

Moore’s Law has driven computer-related technological improvements for decades. While some think Moore’s Law is slowing down or ending, advancements in graphics processing units (GPUs) and customized (rather than general-purpose) chips allow robotics startups to ride the wave of faster and cheaper computation.

Last month, we explored how cheap sensors will allow startups to cost-effectively gather data. Most recently, we looked at how a lower cost of storage gives robots perfect recall of everything they have seen and sensed. With continuing advancements in computation, startups can analyze that accumulated data and better anticipate the environments their robots will encounter. These newer chips can then be bundled with custom software to create standalone units that tackle specific problems effectively.

A great example of this is Audi’s new A8, which will ship with Level 3 driving autonomy built in, powered by Nvidia’s technology. Historically, Nvidia focused on graphics chips for gaming and video performance. It has since broadened its focus, because GPUs can perform the highly parallel matrix math behind deep learning algorithms far faster than general-purpose CPUs. Over the last few years, Nvidia’s stock has skyrocketed as demand for its processors has jumped with the growing interest in deep learning.
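
To make the GPU point concrete, here is a minimal sketch (assuming PyTorch and a CUDA-capable GPU are installed; none of this is specific to Nvidia’s automotive work) that times a large matrix multiplication, the core operation inside deep learning layers, on a CPU versus a GPU.

```python
# Minimal sketch: time one large matrix multiplication on CPU vs GPU.
# Assumes PyTorch is installed and a CUDA-capable GPU is present.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup work before starting the clock
    start = time.perf_counter()
    _ = a @ b                     # the core operation behind deep learning layers
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to actually finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```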

Robotics companies benefit not only from better chip performance, but also from improvements in algorithms. The same deep learning algorithms that Google and Facebook create and perfect to understand their data sets can be used in robotics with little modification.
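
As a hedged illustration of that reuse (assuming PyTorch and a recent torchvision, with a hypothetical five-class warehouse perception task), the sketch below takes a network pretrained on ImageNet and swaps only its final layer, which is often all the modification a robotics team needs to get started.

```python
# Sketch of reusing an off-the-shelf deep learning model for a robotics task.
# The class list is hypothetical; assumes PyTorch and a recent torchvision.
import torch.nn as nn
from torchvision import models

NUM_ROBOT_CLASSES = 5  # hypothetical: pallet, forklift, person, shelf, debris

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; only the new head gets trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with one sized for the robot's task.
model.fc = nn.Linear(model.fc.in_features, NUM_ROBOT_CLASSES)
```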

Here at Lemnos, we are starting to track custom chips built for specific purposes. In the past, the chip market consisted primarily of central processing units (CPUs) and GPUs because there wasn’t enough demand to justify custom silicon. Now that the number of devices has jumped over the last 20 years, producing custom chips makes financial sense. Specifically, we think word recognition chips and computer vision chips will take off.

A word recognition chip would allow a robot to do on-board voice recognition for dozens, if not hundreds, of words. That would greatly reduce the latency between hearing a command and acting on it. Today, when you ask Siri or Alexa something, the request has to make a round trip to a data center to be analyzed and answered.
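
Here is a rough sketch, assuming PyTorch, of what on-board recognition looks like: a tiny, fixed-vocabulary classifier that decides locally which command was spoken, with no network round trip. The vocabulary, feature pipeline, and model are hypothetical placeholders, not a production speech recognizer.

```python
# Hypothetical on-device keyword spotter over a small, fixed command vocabulary.
# Assumes PyTorch; the features and model are illustrative placeholders.
import torch
import torch.nn as nn

COMMANDS = ["stop", "go", "left", "right", "pick", "place"]  # hypothetical vocabulary

class KeywordSpotter(nn.Module):
    """Tiny classifier over pre-computed audio features (e.g. MFCC frames)."""
    def __init__(self, feature_dim: int = 40, num_commands: int = len(COMMANDS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_commands),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

spotter = KeywordSpotter().eval()
features = torch.randn(1, 40)  # stand-in for one frame of audio features
with torch.no_grad():
    command = COMMANDS[spotter(features).argmax().item()]  # decided on-device
print(command)
```

A dedicated chip could run a small, fixed network like this continuously at very low power, which is exactly the trade a cloud round trip cannot make.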

Computer vision is another area we believe will be transformed by custom chips. Once chips that can do limited image recognition are widely available, startups will be able to build robots that are more power-efficient and understand their surroundings much more quickly. At present, some applications simply aren’t practical because the environment changes faster than the robot can make sense of it.
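
The sketch below (again assuming PyTorch and torchvision, with an illustrative 30 Hz control loop standing in for "how fast the environment changes") shows the basic budget check: can one frame of vision inference finish before the robot needs its next decision? A computer vision chip wins by pushing that latency well under the budget at low power.

```python
# Back-of-the-envelope check: does per-frame vision inference fit in the
# robot's control-loop budget? Model and numbers are illustrative only.
import time
import torch
from torchvision import models

CONTROL_LOOP_HZ = 30                # hypothetical: a decision is needed 30x per second
budget_s = 1.0 / CONTROL_LOOP_HZ

model = models.mobilenet_v3_small(weights=None).eval()  # small stand-in vision network
frame = torch.randn(1, 3, 224, 224)                     # stand-in for one camera frame

with torch.no_grad():
    start = time.perf_counter()
    model(frame)
    latency_s = time.perf_counter() - start

print(f"inference: {latency_s * 1000:.1f} ms, budget: {budget_s * 1000:.1f} ms")
print("keeps up" if latency_s < budget_s else "too slow for this control loop")
```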

Overall, advances in chip speed and custom capability will continue to enable robots to go places they haven’t been and take on tasks that were previously impractical. These advances amplify each of the other four robotics megatrends we have explored (connectivity, sensors, computer vision, and storage). Understanding these five megatrends will help you make calculated projections about what technology will be available in two to three years as you start to scale, and help you leverage some of the billions of dollars being spent on their development.

We’re more excited than ever about the future of robotics and expect that it will become a core focus for us in the foreseeable future.

If you’re working on something interesting or just want to chat, find me on Twitter @nomadicnerd.