After getting a hefty $5.6 billion in fresh capital last week, Alphabet’s autonomous driving unit Waymo is now reported to be valued at more than $45 billion.
People are hopping on LIDAR as the one-stop solution to self-driving without understanding what it is. Probably for no reason other than that Musk doesn’t like it, therefore the complete opposite must be true!
LIDAR just makes the “easy” problem of self-driving more robust, i.e. “don’t crash into the object.”
It does nothing for the “hard” problem of self-driving: “What does that road construction worker mean when they wave their hand that way?” or “Is that pedestrian waiting to cross the road, or just standing there?”
LIDAR does absolutely fuckall to solve those types of problems. It will basically say, very precisely, “there is a 1.86m tall object 8.54m in front of you, don’t crash into it.” And “there is a 1.71m tall object standing 4.55m to your right, don’t crash into it.”
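To make that geometry-vs-semantics gap concrete, here is a minimal illustrative sketch (all type and field names are invented for illustration, not any real perception API): a lidar return carries position and size, while the semantic fields a planner actually needs can only be filled in by a vision/AI model.

```python
# Illustrative only: what a lidar pipeline can report vs. what vision must infer.
from dataclasses import dataclass

@dataclass
class LidarDetection:
    # Geometry only: lidar measures where things are, very precisely.
    distance_m: float
    height_m: float
    bearing_deg: float

@dataclass
class SceneUnderstanding:
    # Semantics: only a vision/AI model can fill these in.
    object_class: str   # "construction worker", "pedestrian", ...
    intent: str         # "waving traffic through", "waiting to cross", ...

def plan_around(det: LidarDetection) -> str:
    # The best a purely geometric sensor supports: don't hit the object.
    return f"avoid object {det.distance_m:.2f} m ahead, {det.height_m:.2f} m tall"

print(plan_around(LidarDetection(distance_m=8.54, height_m=1.86, bearing_deg=0.0)))
```

The `SceneUnderstanding` fields are exactly the ones a point cloud, however precise, leaves blank.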
To solve self driving, vision AI MUST be solved.
Tesla is betting that they will solve the vision problem, and that by the time it’s solved, LIDAR will be redundant. Like a person with perfect vision walking with a blind cane: yes, the cane can serve as a backup, but… why bother at that point? Although right now, Tesla is basically a very nearsighted person walking without glasses or a cane.
Waymo is using LIDAR so it avoids the catastrophic, headline-making screwups that Tesla makes. Waymo cars just kinda… get stuck and call a remote human operator to fix the problem. Kinda like a blind man with a cane who constantly has to call his friend for help. He might be fine on well-travelled routes he’s memorized.
Neither is at the point of truly self-driving without human oversight. And again, vision AI is the key, not LIDAR. LIDAR is just a trumped-up version of the anti-collision radar every car has today.
Like I said, the argument is that if AI vision is actually solved, at that point it’s like walking with perfect vision and a blind cane.
LIDAR’s true strength isn’t even that useful for driving at speed. LIDAR is super precise, which is handy for parking perhaps, but when driving at 50km/h or faster, does it really matter whether the object in front is 30.34m ahead or 30.38m?
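The arithmetic behind that rhetorical question, as a quick sketch:

```python
# At speed, a few centimetres of extra range precision buys almost nothing.
speed_kmh = 50
speed_ms = speed_kmh / 3.6            # ~13.9 m/s
range_delta_m = 30.38 - 30.34         # the 4 cm of "extra" precision
time_gained_s = range_delta_m / speed_ms

print(f"{speed_ms:.2f} m/s, {time_gained_s * 1000:.1f} ms of extra reaction time")
```

About three milliseconds of reaction time, i.e. noise compared to perception and planning latency.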
Also, the main problem with LIDAR is that it really doesn’t see much more than cameras do. It uses visible or near-infrared light, so it gets blocked by basically the same things a camera gets blocked by. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.
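A back-of-envelope way to see why correlated failures defeat redundancy (the 10% failure probability below is made up purely for illustration):

```python
# Why two sensors that fail for the same physical reason aren't redundant.
p_fail = 0.10   # assumed chance each sensor is blinded by heavy fog

# If the failures were independent, pairing sensors would help a lot:
p_both_independent = p_fail * p_fail   # 1 in 100

# But fog blocks cameras and lidar for the same physical reason, so the
# failures are (nearly) perfectly correlated:
p_both_correlated = p_fail             # still 1 in 10

print(p_both_independent, p_both_correlated)
```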
I’d like to see redundancy provided by multiple systems that work differently: advanced high-resolution radar, thermal vision, etc. But it all still absolutely requires vision and AI: the ability to identify what an object is and predict its likely actions, not simply measure its size and distance.
The spinning lidar housings also mechanically fling off occlusions like raindrops and dust. And one important thing about lidar is that it actively emits laser pulses, making it a two-way operation, like driving with headlights, rather than purely passive sensing, like driving by sunlight.
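That active emission is also what makes lidar ranging so direct: distance falls straight out of the pulse’s round-trip time via d = c·t/2, with no dependence on ambient light. A minimal sketch:

```python
# Time-of-flight ranging: the sensor times its own laser pulse.
C = 299_792_458  # speed of light, m/s

def lidar_range_m(round_trip_s: float) -> float:
    # Divide by 2 because the pulse travels out to the object and back.
    return C * round_trip_s / 2

# A pulse returning after ~200 ns corresponds to an object ~30 m away.
print(lidar_range_m(200e-9))
```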
Waymo’s approach appears to differ in a few key ways:
- Lidar, as we’ve already been discussing
- Radar
- Sensor number and placement: the ugly spinning sensors on the roof get a vantage point that Tesla’s vehicles simply don’t have today, and every Waymo vehicle seems to carry a lot more sensor coverage (including probably more cameras)
- Collecting and consulting high-resolution 3D mapping data
- Human staff on standby for interventions as needed
There’s a school of thought that because many of these would need to be eliminated for true Level 5 autonomous driving, Waymo is in danger of walking down a dead end that never gets them to the destination. But another take is that this is akin to scaffolding during construction: it serves an important function while the permanent structure goes up, and can be taken down afterward.
I suspect that the lidar/radar/ultrasonic/extra cameras will be most useful for training the models needed to reduce reliance on human intervention, and maybe eventually reduce the sensor count. Not just by adding to the quantity of training data, but by serving as a filtering/screening function that improves the quality of the data fed into training.
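One hypothetical way such filtering/screening could look (every name and threshold here is invented, not anything Waymo has described): treat the lidar range as trusted ground truth and keep only the camera frames where the vision model’s depth estimate agrees with it, yielding a cleaner training set.

```python
# Hypothetical sketch: vet vision depth estimates against lidar ground truth.
def filter_training_frames(frames, max_rel_error=0.10):
    """Keep frames whose vision depth agrees with lidar within 10%."""
    kept = []
    for frame in frames:
        lidar_d = frame["lidar_depth_m"]    # trusted range measurement
        vision_d = frame["vision_depth_m"]  # model estimate being vetted
        if abs(vision_d - lidar_d) / lidar_d <= max_rel_error:
            kept.append(frame)
    return kept

frames = [
    {"lidar_depth_m": 30.0, "vision_depth_m": 31.0},  # within 10% -> keep
    {"lidar_depth_m": 30.0, "vision_depth_m": 45.0},  # way off -> drop
]
print(len(filter_training_frames(frames)))  # 1 frame survives
```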
It’s pretty important to have that “easy” anti-collision problem solved. I’m not quite sure why people think it must be either/or instead of both.