Thursday, April 28, 2016

When the rubber ducky hits the road

Paul Leroux
Rubber duckies are born multitaskers. They can serve as bath toys. Or race for charity. Or track ocean currents. Heck, they can even act as crash-test dummies in tiny autonomous vehicles. Don’t believe me? Then check out the following video from MIT’s Computer Science and Artificial Intelligence Laboratory, otherwise known as CSAIL.

Kidding aside, CSAIL has launched a graduate course on the science of autonomy. This spring, students were tasked with creating a fleet of miniature robo-taxis that could autonomously navigate roads using a single on-board camera and no pre-programmed maps. Here is the (impressive) result:



The course looks like fun (and I’m sure it is), but along the way, students learn to integrate multiple disciplines, including control theory, machine learning, and computer vision. Which, to my mind, is just ducky. :-)
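For the curious, here’s a back-of-the-napkin taste of what single-camera lane following can look like. This little Python sketch (using OpenCV) finds lane-line edges in a frame, measures how far they sit from the image center, and nudges the steering accordingly. To be clear, the function name, the gain, and the whole approach are my own simplification for illustration — not CSAIL’s actual pipeline:

```python
# A toy taste of single-camera lane following, in the spirit of the
# Duckietown taxis: find lane-line edges in a frame, measure how far
# they sit from the image center, and steer back toward the middle.
# My own simplification, not CSAIL's code; names and gain are invented.
import cv2
import numpy as np

def steering_from_frame(frame, gain=0.005):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    roi = edges[2 * h // 3:, :]        # bottom third of the image: the road
    xs = np.where(roi > 0)[1]          # columns of detected edge pixels
    if xs.size == 0:
        return 0.0                     # no lane found: hold course
    offset = xs.mean() - w / 2         # pixels right (+) or left (-) of center
    return -gain * offset              # proportional steering command

# Example: a dark frame with a white "lane line" right of center
frame = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.line(frame, (100, 119), (90, 80), (255, 255, 255), 2)
print(steering_from_frame(frame))      # negative value => steer left
```

Real Duckietown stacks add lane-color segmentation, pose estimation, and a proper controller on top — but the camera-in, steering-out shape of the problem is the same.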


Thursday, April 21, 2016

Autonomous cars that can navigate winter roads? ‘Snow problem!

A look at what happens when you equip a Ford Fusion with sensor fusion.

Paul Leroux
Let’s face it, cars and snow don’t mix. A heavy snowfall can tax the abilities of even the best driver — not to mention the best automated driving algorithm. As I discussed a few months ago, snow can mask lane markers, obscure street signs, and block light-detection sensors, making it difficult for an autonomous car to determine where it should go and what it should do. Snow can even trick the car into “seeing” phantom objects.

Automakers, of course, are working on the problem. Case in point: Ford’s autonomous research vehicles. These experimental Ford Fusion sedans create 3D maps of roads and surrounding infrastructure when the weather is good and visibility is clear. They then use the maps to position themselves when the road subsequently disappears under a blanket of the white stuff.

How accurate are the maps? According to Ford, the vehicles can position themselves to within a centimeter of their actual location. Compare that to GPS, which is accurate to about 10 yards (9 meters).
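How does a car position itself against a stored map? Here’s a deliberately tiny illustration of the idea: match a few landmarks the car currently sees against their surveyed positions in the prior map, and the offset between the two tells you where the car is. (Translation only, made-up coordinates, and nothing like the millions of points a real system crunches — Ford’s actual method is far more sophisticated.)

```python
# Toy illustration of map-based positioning: given surveyed landmark
# positions from a prior 3D map, and the same landmarks as seen from
# the car (in the car's own frame), estimate the car's position.
# Translation-only for simplicity; real systems solve the full pose.
import numpy as np

map_landmarks = np.array([[12.0, 3.0], [8.0, -4.0], [20.0, 1.0]])   # world frame (m)
observed      = np.array([[ 7.1, 1.0], [3.0, -6.1], [15.1, -1.0]])  # car frame (m)

# With translation only: map = observed + car_position, so the
# least-squares estimate is just the mean of the differences.
car_position = (map_landmarks - observed).mean(axis=0)
print(car_position)   # ~[4.93, 2.03] -> the car sits near (5, 2) in the world
```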

To create the maps, the cars use LiDAR scanners. These devices collect a ginormous volume of data about the road and surrounding landmarks, including signs, buildings, and trees. Did I say ginormous? Sorry, I meant gimongous: 600 gigabytes per hour. The scanners generate so many laser points — 2.8 million per second — that some can bounce off falling snowflakes or raindrops, creating the false impression that an object is in the way. To eliminate these false positives, Ford worked with University of Michigan researchers to create an algorithm that filters out snow and rain.
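How might such a filter work? One generic approach — and I stress this is my illustration, not the published Ford/University of Michigan algorithm — exploits the fact that a snowflake return is an isolated point, whereas real obstacles produce dense clusters of returns. Drop any point with too few neighbors, and the “snow” disappears:

```python
# Reject snowflake returns with a simple sparsity test: a snowflake
# shows up as an isolated point, while real surfaces produce dense
# clusters. Drop any point with too few neighbors within a small
# radius. A generic outlier filter, not Ford/U-M's published method.
import numpy as np
from scipy.spatial import cKDTree

def filter_sparse_returns(points, radius=0.3, min_neighbors=3):
    """points: (N, 3) array of LiDAR returns in meters."""
    tree = cKDTree(points)
    # Count neighbors within `radius` (the query point counts itself).
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in points])
    return points[counts > min_neighbors]

# A dense "wall" of points plus a few stray snowflakes
wall = np.random.normal([10, 0, 1], 0.05, size=(200, 3))
snow = np.random.uniform(-5, 5, size=(5, 3))
cloud = np.vstack([wall, snow])
print(len(filter_sparse_returns(cloud)))   # ~200: the snowflakes are gone
```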

The cars don’t rely solely on LiDAR. They also use cameras and radar, and blend the data from all three sensor types in a process known as sensor fusion. This “fused” approach compensates for the shortcomings of any particular sensor technology, allowing the car to interpret its environment with greater certainty. (To learn more about sensor fusion for autonomous cars, check out this recent EE Times Automotive article from Hannes Estl of TI.)
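At its simplest, fusing two sensors boils down to weighting each measurement by how much you trust it. Here’s a minimal sketch of inverse-variance fusion of a radar range and a camera-derived range to the car ahead — the numbers are invented, and production systems run full Kalman filters over many states rather than a single scalar:

```python
# The simplest flavor of sensor fusion: combine two noisy estimates of
# the same quantity, weighting each by its confidence (inverse variance).
# Illustrative numbers only; real systems fuse full state vectors.

def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)          # fused value and (smaller) variance

radar_range,  radar_var  = 25.4, 0.25  # radar: good range accuracy
camera_range, camera_var = 24.1, 4.0   # camera: noisier distance estimate

fused, fused_var = fuse(radar_range, radar_var, camera_range, camera_var)
print(fused, fused_var)   # ~25.32 m, variance ~0.24 -- better than either alone
```

Note how the fused variance is smaller than either input’s: that, in a nutshell, is why blending sensors lets the car interpret its environment with greater certainty.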

Ford claims to be the first automaker to demonstrate robot cars driving in the snow. But it certainly won’t be the last. To gain worldwide acceptance, robot cars will have to prove themselves on winter roads, so we are sure to see more innovation on this (cold) front. ;-)

In the meantime, dim the lights and watch this short video of Ford’s “snowtonomy” technology:



Did you know? In January, QNX announced a new software platform for ADAS and automated driving systems, including sensor fusion solutions that combine data from multiple sources such as cameras and radar processors. Learn more about the platform here and here.