Friday, December 20, 2013

Google's Purchase of Boston Dynamics Could Fast-Track Automated Driving

Google has been working on artificial intelligence (AI) and machine-learning algorithms for many years. By acquiring Boston Dynamics, Google has not only purchased terrifying proto-Terminator robots; it has also gained a wealth of AI developed specifically for robotics.

The broadened catalog of AI algorithms will likely benefit Boston Dynamics' legacy programs, but it has huge implications for one of Google's most hyped projects: the Google Autonomous Car. The Google Car has already logged millions of miles in traffic and appears to be quite reliable in highway driving. This is impressive, but not unique: several traditional automakers have already commercialized automated driving systems capable of limited self-driving (partial automation) in a highway environment. Automating an entire trip, however, is more challenging by an order of magnitude. It's relatively easy to program a car not to crash, but it's not at all clear how to program it to properly navigate congested surface streets full of unpredictable traffic, pedestrians, obstructions, and so on. What Google and others have realized is that it is much easier not to program these vehicles, but to train them.
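
To make that contrast concrete, here is a rough Python sketch (entirely my own illustration; the Scene fields, the thresholds, and the use of a simple off-the-shelf classifier are assumptions, not anything Google has described). The hand-coded version needs an explicit rule for every situation the programmer can anticipate; the trained version just fits a model to logged human decisions.

from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class Scene:
    gap_to_lead_car_m: float
    pedestrian_near_crosswalk: bool
    oncoming_speed_mps: float

# Hand-coded approach: every situation needs an explicit rule,
# and the programmer has to anticipate all of them in advance.
def should_proceed_rule_based(scene: Scene) -> bool:
    if scene.pedestrian_near_crosswalk:
        return False
    if scene.gap_to_lead_car_m < 10.0:
        return False
    return scene.oncoming_speed_mps < 1.0

# Trained approach: fit a model to (scene, human decision) pairs
# logged from test drives, instead of enumerating rules by hand.
def train_policy(scenes, human_decisions):
    features = [[s.gap_to_lead_car_m,
                 float(s.pedestrian_near_crosswalk),
                 s.oncoming_speed_mps] for s in scenes]
    return LogisticRegression().fit(features, human_decisions)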

Google and others have become remarkably good at the mechanical aspects of automated driving. But driving is only partially mechanical; it is largely social. We don't realize how complex a task driving is because we have spent millions of years evolving the algorithms that let us navigate both our physical and social environments. You can easily program an automated vehicle to obey the rules of the road and avoid hitting things. But unless the vehicle can interact with our social world in a reasonably human way, it will become confused and freeze up while trying to navigate it.

The conventional wisdom is that Google is using its fleet of automated vehicles to test its automated driving system. This is certainly true, to an extent, but it's not the most important part of what Google's engineers are doing. Whenever a Google car encounters a unique or unusual situation that the software is not ready for, the test driver must temporarily take over the dynamic driving task. That is necessary for safety when testing on public roads, of course. But it also matters for program development, because the car is watching and learning from the human driver. The aggregated experience of the fleet of Google cars is used to update and refine the controlling algorithms. Software engineers are surely involved in this process, but they don't have to program each unique traffic maneuver from scratch. They have the data on what the car's sensors saw in the unusual situation, and they can see how and why the algorithm became confused, causing the driver to take over. They can use this information, along with what the driver did, to increase the capability of the software. Boston Dynamics very likely has a lot of technology that will improve the Google automated driving system's ability to learn from experience.
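
Here is a rough sketch of the kind of loop I'm describing (again, my own illustration; the field names, file format, and functions are hypothetical, not Google's actual pipeline). Each takeover is logged with what the sensors saw, why the software got confused, and what the human did, and the fleet's aggregated log becomes training data.

import json
from dataclasses import dataclass, asdict

@dataclass
class Disengagement:
    timestamp: float
    sensor_snapshot: dict   # what the car's sensors saw
    planner_state: str      # why the software became confused
    human_action: str       # what the test driver actually did

def log_disengagement(event, path="fleet_log.jsonl"):
    # Each takeover becomes one labeled example in the fleet-wide log.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

def collect_training_data(path="fleet_log.jsonl"):
    # Aggregate every car's takeovers; the human's action is the label
    # the driving software should have produced in that situation.
    examples = [json.loads(line) for line in open(path)]
    features = [e["sensor_snapshot"] for e in examples]
    labels = [e["human_action"] for e in examples]
    return features, labels  # handed off to whatever learning step follows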

A year or so ago, I believed Google was basically playing with toys here, or that the technology might be ready decades from now. Every time I've revised my view, it has meant increasing my respect for the Google Car program and shortening my estimate of when the technology will be ready. My current prediction: Google will have a consumer-ready driverless (NHTSA Level 4 automation) product or service before 2018.


