What's the thesis here? That humans are imperfect drivers, so we should accept self-driving cars veering into oncoming traffic, and if anyone objects, focus on their faults instead?
However, I'm not saying that their licence should be taken away (we've all made mistakes - I should have made that clear), but if that is representative, it is dangerous. I rode motorbikes for many years, and you can spot people who are dangerous very quickly from their 'car body language'. And an error like that in another circumstance could kill someone.
I knew people whose driving had deteriorated like this (my late mother and her friends spring to mind). They refused to accept they were not capable any more, and she used to do things that terrified me. I had discussions (kindly!) about her driving standards and errors and she refused to accept that she was making the errors she was. And they were not as bad as what I saw in the video.
Still seems nuts to me; they're pretending Waymo doesn't already regularly drive without supervision.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
It seems like it would be a really bad idea to put this out there if they know from the data that it will cause lots of accidents.
It's also possible they quietly rely on remote operators to a large extent. The fake robot PR stunt wasn't even that long ago, the company might try the same thing here.
I just don't see how this train of thought makes sense.
https://www.youtube.com/watch?v=1-5s4JlBesc
Nobody's gonna be the one to tell Elon that the stuff isn't ready.
The phenomenon in that video is only possible when management is heavily disconnected from ground truths & data, but that seems the opposite of how Musk's companies are run.
Tesla: a company that doesn't take risks unnecessarily.
Whilst a wheel wobble & veering somewhere it's not supposed to go looks bad, it's very difficult to do worse than the average human driver in terms of safety.
It's statistically unlikely that we'd see an issue like this on the first day of a limited deployment if FSD was hitting those numbers.
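To put a number on "statistically unlikely": if a claimed rate of one serious issue per million miles were accurate, a quick Poisson sketch shows how improbable a day-one incident would be. The fleet size, per-car mileage, and incident rate here are rough assumptions for illustration, not reported figures.

```python
import math

# Assumed (not reported) numbers: ~10 cars driving ~100 miles each on day one,
# and a hypothetical claimed rate of 1 serious issue per 1,000,000 miles.
miles_day_one = 10 * 100
claimed_rate_per_mile = 1 / 1_000_000

# Model incidents as a Poisson process: P(at least one) = 1 - e^(-rate * miles)
expected_incidents = claimed_rate_per_mile * miles_day_one
p_at_least_one = 1 - math.exp(-expected_incidents)

print(f"Expected incidents on day one: {expected_incidents:.4f}")
print(f"P(seeing at least one incident): {p_at_least_one:.4%}")
```

Under those assumptions, the chance of witnessing an incident on day one is around 0.1%, which is the commenter's point: seeing one immediately suggests the real rate is much worse than claimed.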
They know where they stand when there is a safety driver behind the wheel. I'd expect if that data were really good, they'd be less secretive about it. But still, it says very little about where they stand without that driver.
How is it relevant though?
People have some sort of beef with Elon and thus FSD, but Waymo has made WAY WAY worse mistakes.
Waymo seems to be the furthest ahead in my complete outsider opinion. I feel like everyone else is a way distant second.
- Elon, probably
Assuming the Austin robotaxi service continues to go smoothly, I expect Tesla will leapfrog both of them in the next 12 months. Tesla's cost-effective approach (cameras) combined with the fact that they have already scaled vehicle production positions them really well.
Media (and therefore people who trust it too much) will point out places where each of the services goes wrong, but the reality is that likely all of them are already much safer than human drivers if you define "safety" in terms of severe accidents per million miles.
More than a million people die each year from automobile accidents with humans behind the wheel.
Will sound engineering prevail over brain rot in the C-suite? I am skeptical.
There's no chance this includes Tesla with their disengagements (equivalent to the driver passing out), and even then it's only true under the restricted set of conditions these systems actually operate under compared to human drivers.
I'm guessing that they just stop if they ever encounter a situation where they don't know what to do.
Human drivers don't get the luxury of disengaging and having a more skilled driver take over when they're struggling. If Tesla FSD drives for 100km before overwhelming glare causes the system to disengage, that'll go on record as "100km driven without accident", but when the human driver is blinded by the same glare and ends up in an accident 5km later, that'll go on record as 1 accident per 5km driven for the human driver.
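The accounting asymmetry described above is easy to sketch with toy numbers (entirely made up, purely to show the bias):

```python
# Sketch of how disengagements can skew per-km accident accounting.
# All figures are illustrative assumptions, not real telemetry.

# Automated system: drives 100 km, disengages at the hard part, no accident logged.
fsd_km, fsd_accidents = 100, 0

# Human: drives the same 100 km plus the hard 5 km, and crashes there.
human_km, human_accidents = 105, 1

fsd_rate = fsd_accidents / fsd_km        # 0.0 accidents per km
human_rate = human_accidents / human_km  # ~0.0095 accidents per km

# The comparison looks lopsided, but only because the automated system's
# denominator excludes exactly the kilometres it couldn't handle.
print(f"FSD: {fsd_rate} accidents/km, human: {human_rate:.4f} accidents/km")
```

Any apples-to-apples comparison would need to charge the disengaged kilometres (or the disengagements themselves) against the automated system somehow.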
Oh my god, that's terrifying if true. I can think of many driving situations where slamming on the brakes is the absolute wrong choice. Tesla is pushing this out way before it's safe enough to operate in public.
1. Keep driving in some way
2. Stop
Which one do you think Waymo (or any other system) does when it doesn't understand a situation?
Setting aside whether the video is biased or not, are we conflating comfort and safety? When a robotaxi jiggles the wheel and makes the car shake, that's clearly terrible for comfort and perhaps brings to mind an inexperienced driver, but that is a personification. Programmers who have ever dealt with debouncing might recognize it as something that could use smoothing to aid comfort, but it is not necessarily unsafe.
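For the debouncing analogy, here is a minimal smoothing sketch: an exponential moving average (a simple low-pass filter) applied to a jittery steering signal. The steering values are made up for illustration, not taken from any vehicle.

```python
def ema_smooth(values, alpha=0.2):
    """Exponential moving average; smaller alpha means heavier smoothing."""
    smoothed = []
    prev = values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# A hypothetical jittery steering command oscillating around 0 degrees.
noisy = [0.0, 2.0, -2.0, 2.0, -2.0, 0.0]
print(ema_smooth(noisy))  # oscillation amplitude shrinks well below 2.0
```

The same idea (with proper filtering and latency constraints) is how control systems usually trade a little responsiveness for a lot of ride comfort.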
In any case, Apollo Go is ahead in number of vehicles, cities, rides, and probably also miles. Not that it really matters. Demand is going to lead supply in autonomy for many years to come. These services are not actually going to be in real competition with each other for quite some time.
I was in China (Beijing) a couple of months ago but didn't get a chance to ride a driverless car there, because they were only running them in the deep suburbs (Changping near Lenovo, I think?). This is quite different from my experience taking Waymo, where it served the central city just fine (although I'm not sure how Waymo would handle a narrow hutong alleyway like one of the taxis I took did).
Once I saw the part where the car was being blamed for the pickup location chosen by the user, I really struggled to believe anything else that was being said. It felt like they wanted to say it's not good right from the start.
This started with me pointing out that the scale of Apollo Go is greater than Waymo, but then you made it about safety, which is quite a different (but still interesting) topic. Then there's this video that has been brought up in the context of safety, but seems more related to ride comfort. Ride comfort is also interesting, but different than safety.
I'm not sure we can actually have a reasonable conversation without objectively separating such topics.
I haven't looked at what the service area is, but I wouldn't assume that a downtown service area is somehow better or worse than a suburban service area, especially not without defining "better". A suburban-focused service seems like it might actually be a lot more useful than a downtown-focused service.
Back to the original point: you can easily see the data yourself. The scale of Apollo Go is slightly larger than Waymo's on most metrics that would seem to matter: vehicles, rides, miles, etc.
There's a high floor to Waymo's costs though, and therefore a lower ceiling to their profits. Tesla's aim was always to make it (way) more scalable. If they can match Waymo's performance at Tesla's costs, Waymo goes the way of the dodo pretty quick.
Someone will own the platform that coordinates car movement, and then all cars will need to pay to get onto this coordination platform, which will tell each car how to drive, which route to take, etc. Each car will know what all the other cars in its area will be doing, so that mass coordination is possible. This is how you can get a completely full highway with everyone traveling at 65 mph, one foot apart.
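A back-of-envelope check on what that spacing would buy, with assumed numbers (15 ft car length, a ~2-second human following gap versus a 1-foot coordinated gap):

```python
# Rough lane-throughput comparison at 65 mph. All inputs are assumptions
# for illustration, not measured traffic data.

MPH_TO_FPS = 5280 / 3600           # feet per second per mph
speed_fps = 65 * MPH_TO_FPS        # ~95.3 ft/s
car_length_ft = 15

def cars_per_hour(gap_ft):
    # Each vehicle occupies (car length + gap) feet of moving lane.
    return speed_fps * 3600 / (car_length_ft + gap_ft)

human_gap_ft = 2 * speed_fps       # a ~2-second following distance
print(f"Human-style spacing:  {cars_per_hour(human_gap_ft):.0f} cars/hour/lane")
print(f"Coordinated 1 ft gap: {cars_per_hour(1):.0f} cars/hour/lane")
```

Under these assumptions the coordinated lane carries roughly an order of magnitude more vehicles, which is the appeal of a coordination platform; whether 1-foot gaps are ever safe or legal is another question entirely.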
I suppose the solution is to seal off all roads with high walls then; what a dystopian future.
China is demoing a lot of self driving cars in their suburbs however, so not core Beijing, but out in Daxing or Changping, for example. China could and will mandate self driving cars in dense city cores when they become viable to optimize traffic flow in cities that can’t really fit many more new roads. And Chinese companies are working hard to make mass produced Lidar economical, while America will probably just put high tariffs on that.
Car ownership definitely isn’t going away. Sharing a vehicle with the general public gets old fast if you have the means to avoid it.
There are also many reasons people use cars for more than going from point A to point B on a purely transactional basis. Many professions need to leave things in the car or truck like tools or even your laptop. Having to take everything you own into every building in case the self-driving car gets called back home for service or whatever isn’t going to work.
If the cost of the Waymo Driver hardware falls to the point where it's not prohibitive for the low duty cycle of a private vehicle, I could see that eventually happening.
Then local governments will charge more to allow non-robot cars on the road (less wear and tear on the roads, fewer roads needed due to more reliably predictable drivers and fewer accidents).
Then manufacturers will get to a point where they need to simplify their production range and will pretty much only produce self-driving cars.
Lastly, culturally the demand will change. The iPad generation don't want to learn to drive or spend their screen time driving. The demand from them alone will push for self-driving cars.
Also authorities are getting giddy when a human tries to drive on railways, so it's effectively illegal to drive in certain places where the ride sharing is the default mode of transportation. It's also very privacy focused, even though there are cameras everywhere you can just buy an anonymous travel pass that you top-up every once in a while. It also allows you to hop between rides for free or at discount.
In the larger cities they often use hyperloop, so you never get stuck in the traffic.
There is a massive long tail of unusual cases that happen outside of major cities that nobody is even really trying to solve at the moment because there isn't much of a profit motive compared to dense cities.
On the right, car ownership is deeply culturally ingrained as a signifier of personal freedom, and the notion of the government requiring that a robot drive you instead would be unpalatable, never mind the notion of no car ownership at all.
On the left, you have a generally hostile attitude towards cars in general as undesirable and to be replaced with public transport. Then there's a growing current of FSD being associated with "big tech" in general and Musk in particular that makes it an even stronger knee-jerk reaction.
This conflation and mixing of words is something which Tesla seems to do intentionally (Full Self Driving, Autonomy, etc), or am I being cynical about that?
Presumably if there were an incident they would be trying to remedy any situation.
The same as a driving instructor: he has a brake pedal, and he can grab the steering wheel if necessary.
karlgkk•7mo ago
> 10-or-so cars
> human driver behind the wheel (except in this beta)
> invitation based (apparently a very limited audience)
> geofenced not only to a city, but to a small handful of neighborhoods
> early reports suggest disengagement requires remote re-engagement
I hope they get there, more competition in this space is good. But, this is pathetic. They're so far behind Waymo it isn't even funny.
Hopefully the Tesla app forces Waymo to reconsider their deal with Uber altogether and just run it themselves like they do in San Francisco.
MetaWhirledPeas•7mo ago
Are they sycophants or are the other sites haters? Take a gander at every tech website's headlines regarding Tesla for the last few years and tell me they aren't following the "if it bleeds, it leads" mantra.
The videos from the people invited to preview Robotaxi were not only unedited; the invitees were also allowed to live stream.
jfoster•7mo ago
Are you looking at the feeds from the cars to reach 32? Each screen is from the cameras around a single car, so you're actually only seeing 5 cars from that.
_ea1k•7mo ago
If they are still doing this in 3 months, it'd be a bad sign, of course. Their plan is for rapid growth next year.
We'll see if they are able to do that.
smallmancontrov•7mo ago
- Me, if I wanted to be equally ham-handed in throwing the comparison for Tesla.
In reality, we are seeing two bets on two different approaches. Do you scale up supervised driving to maximize data collection/diversity and then go unsupervised? Or do you go unsupervised and then scale up problem solving as you go? The cool thing is that both of these approaches are being tried so we will find out. If you want to place a bet of your own, you can find the casino in your favorite brokerage app!
In any case, the videos of what is possible with supervised FSD are quite amazing, certainly not "pathetic," and what remains to be seen is if they can successfully navigate the supervised->unsupervised jump, which is certainly not trivial.
Arc de Triomphe: https://youtu.be/o2xKpbKZLVA?t=7
Busy China: https://youtu.be/ybBpRN4Hqbc?t=13
Manhattan: https://www.youtube.com/watch?v=qafr3RrJRfU