In some spaces we still have the rule of law - when xAI started doing the deepfake nude thing, we kind of knew no one in the US would do anything, but jurisdictions like the EU would. And they are now. It's happening slowly, but it is happening. Here, though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
the issue is that these tools are widely accessible, and at the federal level, the legal liability falls on the person who posts the content, not on whoever hosts the tool. this was a mistake that will likely be corrected over the next six years
due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
unlike deepfakes, there are extensive road safety laws and civil liability precedents here. texas may be pushing tesla forward (maybe partly for ideological reasons), but it will be an extremely hard sell to get any of the major US cities on board with this.
so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
In the specific case of grok posting deepfake nudes on X, doesn't X both create and post the deepfake?
My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo.
Where grok is at risk is in not responding after they are notified of the issue. It's trivial for grok to ban some keywords here and they aren't; that's a legal issue.
[citation needed]
Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law...
if you think i said otherwise, please quote me, thank you.
> Historically hosts have always absolutely been responsible for the materials they host,
[citation needed] :) go read up on section 230.
for example with dmca, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice
that is quite some distance from "always absolutely". in fact, it's the whole point of 230
Getting this to a point where it is consistently better than humans is not equivalent to fixing bugs in ordinary production software for phones and the like.
When you are dealing with a dynamic, uncontained environment, it is much more difficult.
Any engineering student can understand why LIDAR+Radar+RGB is better than a single camera alone; and anyone moderately aware of tech can realize that digital cameras are nowhere near as good as the human eye.
But yeah, he's a genius or something.
To me it looks like they will reach parity at about the same time, so camera-only is not totally stupid. What's stupid is forcing robotaxis onto the road before the technology is ready.
Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end.
Camera-only was a massive mistake. They'll never admit it, because there are now millions of cars out there that will be perceived as defective if they do. This is the decision that will sink Tesla, you'll see. But hail Karpathy, yeah.
* Sorry, I couldn't resist.
It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4.
Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet.
You can solve this by having multiple cameras for each vantage point, with different sensors and lenses optimized for different light levels. Tesla isn't doing this, mind you, but with multiple cameras it should be easy enough to exceed the dynamic range of the human eye, so long as you auto-select whichever camera is getting the correct exposure at any given point.
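To make that concrete, here's a toy sketch of the auto-selection step (nothing Tesla-specific, just one plausible heuristic: keep whichever camera's frame has the fewest clipped pixels):

    import numpy as np

    def best_exposed(frames, low=0.02, high=0.98):
        # frames: one grayscale image per camera covering the same vantage
        # point, as float arrays scaled to [0, 1], each at a different exposure
        def clipped_fraction(img):
            # fraction of pixels crushed to black or blown out to white
            return np.mean((img < low) | (img > high))
        # keep the frame that preserves the most usable dynamic range
        return min(frames, key=clipped_fraction)

In practice you'd probably fuse the frames HDR-style rather than pick one, but even this naive version shows why multiple exposures sidestep the single-sensor dynamic range problem.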
> What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.
> Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
> ...
https://news.ycombinator.com/item?id=14600924
Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle!
Technology is just not there yet, and Elon is impatient.
No reason to assume that. A toddler whose walking speed increases every month will still never outrun a cheetah.
Waymo could be working on camera-only. I don't know. But it's not controlling the car. And until such time as they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
That ain't true [1].
Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and are barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors depending on the version you bought. I believe current models don’t come with a surround view camera either, which is almost standard on all cars at this point, and very useful in practice. I guess I am not surprised the Robotaxis are also barebones.
The robotaxi market is much broader than the submersible one, so the effect of consumer irrationality would be much bigger there. I'd expect the average customer in the submarine market to do quite a bit more research on what they're getting into.
A small number of humans bring a bad name to the entire field of regular driving.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
What's actually "distinct?" The secret sauce of their code? It always amazed me that corporate giants were willing to compete over cab rides. It sort of makes me feel, tongue in cheek, that they have fully run out of ideas.
> they will assume all robotic driving is crash prone
The difference in failure modes between regular driving and autonomous driving is stark. Many consumers feel the overall compromise is unviable even if the error rates between providers are different.
Watching a Waymo drive into oncoming traffic, pull over, and then hearing a tech support voice talk to you over the nav system is quite the experience. You can have zero crashes, but if your users end up in this scenario, they're not going to appreciate the difference.
They're not investors. They're just people who have somewhere to go. They don't _care_ about "the field". Nor should they.
> dangerous and irresponsible.
These are, in fact, pilot programs. Why this lede always gets buried is beyond me. Instead of accepting the data and incorporating it into the world view here, people just want to wave their hands and dissemble over how difficult this problem _actually_ is.
Hacker News has always assumed this problem is easy. It is not.
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps they seem totally unprepared for, yelling “YOLO,” and then expects to be treated the same when it doesn’t work out by saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere with active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number, you're drastically underprepared for what happens when they expand this program beyond these paltry self-imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
I don't know what a clear/direct way of explaining the difference would be.
Totally rational.
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
Once Elon put himself at the epicenter of American political life, Tesla stopped being treated as a brand, and more a placeholder for Elon himself.
Waymo has excellent branding and first to market advantage in defining how self-driving is perceived by users. But, the alternative being Elon's Tesla further widens the perception gap.
For those complaining about Tesla's redactions - fair and good. That said, Tesla formed its media strategy at a time when gas car companies and shorts bought ENTIRE MEDIA ORGs just to trash them to back their short. Their hopefulness about a good showing on the media side died with Clarkson and co faking dead batteries in a roadster test -- so, yes, they're paranoid, but also, they spent years with everyone out to get them.
[1] https://www.businessinsider.com/musks-claim-teslas-appreciat...
Are you being sarcastic due to Elon buying Twitter to own/control the conversation? He would be a poster child for the bad actions you are describing.
"4x worse than humans" is misleading; I bet it's better than humans, by a good margin.
Though maybe the safety drivers are good enough to catch the major stuff, and the software is just bad at low-speed, short-distance collisions, where the drivers don't notice as easily that the car is doing something wrong before it happens.
I know that it is irrational to expect any kind of balance or any kind of objective analysis, but things are so polarized that I often feel the world is going insane.
Yet it is quite odd how Tesla also reports that untrained customers using old versions of FSD with outdated hardware average 1,500,000 miles per minor collision [1], a literal 3000% difference, when there are no penalties for incorrect reporting.
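For anyone checking the arithmetic behind that "3000%" (the robotaxi figure below is just what the claimed ratio implies, not a number from any filing):

    # 1,500,000 miles per minor collision is the customer-FSD figure cited above;
    # the robotaxi figure is backed out from the claimed 3000% (i.e. 30x) gap
    customer_fsd_miles_per_collision = 1_500_000
    claimed_gap = 30  # 3000% expressed as a multiple
    implied_robotaxi_miles_per_collision = customer_fsd_miles_per_collision / claimed_gap
    print(implied_robotaxi_miles_per_collision)  # 50000.0 miles per minor collision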
Consumer supervision means having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and foot on the pedals, ready to jump in.
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles Michael FSD-is-awesome-I-use-it-daily Smith posts incessantly on X about.
There is this mismatch where overrepresented people who champion FSD say it's great and has no issues, while the reality is that none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
In before, 'but it is a regulation nightmare...'
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and the fourth probably wasn't Tesla's fault (it was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
So the average driver is also likely a bad driver by your standard. Your standard seems reasonable.
The data is inconclusive on whether Tesla robotaxi is worse than the average driver.
Unlike humans, Waymo does report 1-4 mph collisions. The data is very conclusive that Robotaxi is significantly worse than Waymo.
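With crash counts this small, "conclusive" is worth checking rather than eyeballing. Here's a minimal sketch of the standard exact test for comparing two Poisson rates, with made-up placeholder counts and mileages (not the actual NHTSA figures):

    from math import comb

    def one_sided_p(c1, m1, c2, m2):
        # Under the null that both fleets crash at the same rate per mile,
        # c1 ~ Binomial(n, p) with n = c1 + c2 and p = m1 / (m1 + m2),
        # conditional on the total number of crashes.
        n, p = c1 + c2, m1 / (m1 + m2)
        # probability of seeing c1 or more crashes in fleet 1 by chance
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c1, n + 1))

    # placeholder numbers for illustration only
    print(one_sided_p(c1=5, m1=250_000, c2=4, m2=1_000_000))  # ~0.02

The human comparison stays inconclusive no matter what test you run, because human drivers don't report 1-4 mph bumps at all, so the two counts aren't measuring the same thing.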
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or *whether these were unavoidable situations caused by other road users*. Tesla wants us to trust its safety record while making it impossible to verify.
My suspicion is that these kinds of minor crashes are simply harder for safety drivers to catch, or maybe the safety drivers did intervene here and slowed the car down before the crashes. I don't know if that would show up in this data.