frontpage.
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
100•theblazehen•2d ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•189 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•549 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
227•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•113 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
327•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
3•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
250•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

When Tesla's FSD works well, it gets credit. When it doesn't, you get blamed

https://electrek.co/2025/11/08/schrodingers-fsd-when-things-go-well-teslas-driving-when-they-dont-you-are/
151•Bender•2mo ago

Comments

fabiensanglard•2mo ago
List of predictions for autonomous Tesla vehicles by Elon Musk: https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
Animats•2mo ago
That needs to be adjusted for Tesla putting the Robotaxi "safety driver" back behind the wheel when the self-driving Robotaxi thing didn't really work.
runako•2mo ago
I can't believe they are giving credit for "robotaxi" launching with human drivers. Yellow Cab had those in the '50s.
cerved•2mo ago
Funnily enough, not an article on grokipedia
neilv•2mo ago
> However, at this week’s shareholder meeting, Musk stated that Tesla may allow “texting and driving” within “a month or two.”

The family of the first person killed by that will know who to sue for a punitive trillion dollars.

twostorytower•2mo ago
Would at least like to use it for stop and go traffic, which is about the only thing I trust FSD for.
kevincrane•2mo ago
Just pull over if you have things to text that can’t wait
SoftTalker•2mo ago
Actually just wait. Pulling over on a busy road creates its own hazards.
danans•2mo ago
> Would at least like to use it for stop and go traffic, which is about the only thing I trust FSD for.

Depends on the type of stop and go driving. Crawling along at 15mph, sure. But the most dangerous driving scenario - whether human or machine is the driver - is a scenario with large variations in speed between vehicles and also limited visibility.

For example suddenly encountering a traffic jam that starts around a blind corner.

jfoster•2mo ago
Dangerous for some humans. If a self-driving system is taking a blind corner, I would expect it to be approaching it at a more appropriate speed.
ckocagil•2mo ago
That's also the most tiresome part of driving and has the least risk due to low speeds. Easy win for FSD. But for all other cases it becomes a complicated ethical question.
dzhiurgis•2mo ago
Sorry, but disabling autopilot/FSD to use your phone is much crazier.
estearum•2mo ago
These are, after all, the only options available to a Tesla driver!
paulbjensen•2mo ago
One reason I wouldn't trust Tesla's FSD: Sun glare: https://www.reddit.com/r/TeslaFSD/comments/1huzgtu/tesla_fsd...

Drivers have tackled this problem by wearing polarized sunglasses.

I really hope someone asks Tesla how they plan to solve the Sun glare issue.

simondotau•2mo ago
According to users, that issue appears to be solved as of FSD v13. The solution may be reliant on the higher quality camera modules shipped with hardware 4.
Gigachad•2mo ago
Will Tesla recall the old defective cars?
simondotau•2mo ago
For people who paid for FSD, I think they should.
MBCook•2mo ago
So why did it ever ship like that?

Why was that OK? Why was it safe to let people use like that without informing them?

danans•2mo ago
When people are so taken by your brand that they volunteer their money and lives to test your half-baked product, why would you stop them?
terminalshort•2mo ago
Because all cars ship with design flaws. Basically any model of car out there is known for its particular parts that break often.
dawnerd•2mo ago
I have hw4. Not fixed.
jmuguy•2mo ago
I'm sure they'll figure this out right around the time they test the cars in places that have more than one season.
dawnerd•2mo ago
They don’t even work well in California. Driving into sun? Car loses all lane visibility and yells at you. Driving late at night? Can’t detect lanes and complains about blinded cameras.

If only there was some kinda technology that didn’t rely on optics that could see in pitch dark or when the sun is shining.

PunchyHamster•2mo ago
Next up: they put an LCD layer with selective dimming over the cameras and claim a world's first, instead of investing in lidar/radar as they should have from the start rather than dropping it "coz it is not needed"
lukax•2mo ago
Kind of similar to agentic coding. The code works? Yay, well done, AI. It doesn't? You didn't prompt it correctly.
ares623•2mo ago
Saved 20 minutes of your time? Slack goes wild.

Wasted 1 hour each of your 5 co-workers who ended up reviewing unusable slop? Silence.

ben_w•2mo ago
I hear you, but GenAI also gets the opposite fork from people who hate it: a good result that used GenAI at any point => your prompting, curation, and editing are worthless and deserve no credit; a bad result => that proves AI isn't real intelligence.

As with Marmite, I find it very strange to be surrounded by a very big loud cultural divide where I am firmly in the middle.

Unlike Marmite, I wonder if I'm only in "the middle" because of the extremities on both ends…

coliveira•2mo ago
This scam is true for all AI technologies. It only "works" as far as we interpret it as working. LLMs generate text. If it answers our question, we say that the LLM works. If it doesn't, we say that it is "hallucinating".
adocomplete•2mo ago
How is that any different from Googling something and believing any of the highly-SEO optimized results that pollute the front page?
coliveira•2mo ago
That's the point: nobody really believes there is an intelligence generating Google results. It is a best-effort engine. However, people have this belief that ChatGPT somehow has an intelligent engine generating results, which is incorrect. It is only generating statistically good results; whether they are true or false depends on what the person using it does with them. If it is poetry, for example, it is always true. If it is how to find the cure for cancer, it will with very high probability be false. But if you're writing a novel about a scientist finding a cure for cancer, then that same response will be great.
CamperBob2•2mo ago
So "statistics" are enough to take gold at IMO?
ares623•2mo ago
Search engines didn't need $500B and growing in CAPEX.
pmarreck•2mo ago
It's not a scam because it does make you code faster even if you must review everything and possibly correct (either manually or via instruction) some things.

As far as hallucinations go, it is useful as long as its reliability is above a certain (high) percentage.

I actually tried to come up with a "perceived utility" function as a function of reliability: U(r)=Umax ⋅e^(−k(100−r)^n) with k=0.025 and n=1.5 is the best I came up with, plotted here: https://imgur.com/gallery/reliability-utility-function-u-r-u...
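The proposed curve is easy to evaluate directly. A minimal sketch, assuming the constants k=0.025 and n=1.5 from the comment above (the function name and sample reliabilities are mine):

```python
import math

# Perceived utility as a function of reliability r (percent, 0-100),
# using the commenter's curve U(r) = Umax * exp(-k * (100 - r)^n).
def perceived_utility(r, u_max=1.0, k=0.025, n=1.5):
    return u_max * math.exp(-k * (100.0 - r) ** n)

# Utility collapses quickly once reliability drops much below ~95%.
for r in (80, 90, 95, 99, 100):
    print(f"r={r:3d}%  U={perceived_utility(r):.3f}")
```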

paul7986•2mo ago
I'm sorta beginning to think some LLM/AI stuff is the Wizard of Oz (a fake-it-before-you-make-it facade).

Like, why can an LLM create a nicely designed website for me, but asking it to make edits and changes to the design is a complete joke? Much of the time it creates a brand new design (not what I asked for at all), and its attempts at editing it, LOL. It makes me think it does no design at all; rather, it just grabbed one from the ethers of the Internet and acted like it created it.

coliveira•2mo ago
> it does no design at all rather it just went and grab one from the ethers of the Internet

Bingo. It just "remembers" one of the many designs it has seen before.

ryandrake•2mo ago
"Trust me, bro, it works" has become kind of a theme song to these guys.
whynotmaybe•2mo ago
Apple did it successfully, it's a marvelous ecosystem, but when there's an issue, it's because you're not holding it the correct way.
JumpCrisscross•2mo ago
Agentic code doesn’t kill people. Tesla drivers who think they own a Waymo do.
reppap•2mo ago
We're just waiting for AI code in a Therac-25 type device.
blibble•2mo ago
isn't that the entire "AI" experience, in a nutshell?
jasonsb•2mo ago
No. There's a critical difference between AI as a tool and AI as a product. When you use a tool, the responsibility to verify the output is yours. But when you pay for a product like Tesla's Full Self-Driving, the company assumes liability for its performance. If FSD causes harm, Tesla must be held responsible.
weirdindiankid•2mo ago
I’m still perplexed by the utter lack of regulatory action in both the U.S. and Canada with FSD. Its inconsistency and Tesla’s “seat of the pants” approach to safety have gotten enough people injured to where I’d have expected someone to take notice by now. Don’t get me wrong, when it works, it’s amazing but there’s no way to reliably establish that it works on the same route twice.
jonway•2mo ago
This is a fairly weak shower thought, but it was interesting to note (anecdotally) people's propensity to rigidly prioritize safety in some, but not all circumstances.

For example, I've had numerous conversations where people will point to safety ratings in vehicles to defend their purchasing decisions. It's simple to understand, really: I want the safest car for my family/child, etc.; that is why I refuse to buy an older, used vehicle, or prefer a sedan over an SUV. Safety becomes cover for preference, and for defending trends like the expansion of pickup truck sizes since the 2000s, while there is no safety rating or even an objective measure of the efficacy of these self-driving systems.

Hopefully I haven't wasted your time; it's just a psychological trend that I think exists.

paulryanrogers•2mo ago
Shame they don't reduce safety ratings for the bigger vehicles with worse visibility that are more likely to kill pedestrians/bicyclists and cause more respiratory problems than smaller vehicles.
jonway•2mo ago
Yes, it's maybe an "internal" safety rating? Since the ratings are from a standardized test usually conducted on a powered rail trolley, I've often wondered if the safety rating would be different if the testing considered those other factors, like crashing into a Civic vs. a Chevy truck, or vice versa.

I have training and a great deal of experience with vehicle emissions systems, but no medical training beyond first aid and CPR. I think the respiration problems are mostly caused by particulates, volatile organic compounds (VOCs), oxides of nitrogen (NOx), and sulfur dioxide (which is quite nasty), mostly from diesel emissions. Catalytic converters and emission controls prevent nearly all VOCs and roughly 90% of NOx, so the effect from passenger vehicles seems pretty small there. They do nothing to eliminate SO2, which is why we mandate DEF (diesel exhaust fluid) on some diesels.

Brake dust and tire wear are other, smaller contributors, though.

PunchyHamster•2mo ago
It's weird; it's essentially allowing an unlicensed driver on the roads.
terminalshort•2mo ago
Show me actual stats on the safety vs human drivers and I may actually care. The lack of them makes me think that it isn't actually that dangerous compared to humans.
weirdindiankid•2mo ago
Agreed. Unfortunately, Tesla’s quite cagey with publishing safety data, far as I can tell. For what it’s worth, both HW4 and HW3 with FSD 13.x and 12.x have put me in dangerous situations enough times to where I no longer trust it.
terminalshort•2mo ago
I would like to see that data too, but seeing how unhinged the media reaction is when even one person is killed in a Tesla (or even when a cat was killed by a Waymo) I do understand why they don't release it.
Animats•2mo ago
As time goes on, Tesla's fiasco becomes more and more embarrassing. Waymos are all over the place in the cities they serve, doing pretty much what they're supposed to do. Nuro has some fully autonomous vehicles running around. Baidu's Apollo Go is deployed in 16 cities in China, although they use remote driving as a backup.

Tesla, though, is still hyping a technology that seems to have maxed out years ago.

brianwawok•2mo ago
And yet it drives me on 2000 mile car trips without touching anything. If you ignore the hype and look at the actual product, it’s fine.
paulryanrogers•2mo ago
Anecdotes are nice. Miles per disengagement stats would be better.
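The metric itself is trivial to compute once you have trip logs; a minimal sketch with invented numbers, purely for illustration (Tesla publishes no such per-trip data):

```python
# Hypothetical trip logs: miles driven under FSD and the number of times
# the human had to take over. All values are made up for illustration.
trips = [
    {"miles": 120.0, "disengagements": 1},
    {"miles": 35.5,  "disengagements": 3},
    {"miles": 410.0, "disengagements": 2},
]

total_miles = sum(t["miles"] for t in trips)
total_disengagements = sum(t["disengagements"] for t in trips)

# Guard against the (happy) case of zero disengagements.
mpd = total_miles / total_disengagements if total_disengagements else float("inf")
print(f"{mpd:.2f} miles per disengagement")
```

Note the obvious caveat raised elsewhere in the thread: a fleet-level figure like this is dominated by easy highway miles unless trips are segmented by road type.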
buran77•2mo ago
That's not an anecdote; it sounds like an exaggeration bordering on flat-out lying. A 2,000-mile trip "without touching anything" to drive the car is statistically impossible for any reasonable drive (e.g. not endless straight lines on an Australian highway), especially for a Tesla, famously known for needing frequent interventions. Even more advanced autonomous driving systems are far too limited to take an arbitrary 2,000-mile trip with zero human assistance.
madamelic•2mo ago
13 is quite good. 14 is even better.

2,000 may be stretching it, but it is possible if the driver is trusting enough. Personally, many of my disengagements aren't because it is being dangerous, but just sub-optimal: not driving as aggressively as I want to, not getting into the off-ramp lane as early as I'd like, or just picking weird navigational choices.

Trying to recall, but I haven't had a safety-related disengagement in probably a few months across late 13 and 14. I am just one data point, and the main criticisms I've seen of 14 are: 1) getting rid of fine speed controls in favor of driving-style profiles; 2) its car and obstacle avoidance being overtuned, so it will tap the brakes if, for instance, an upcoming perpendicular car suddenly appears and starts to roll its stop sign.

Personally, I prefer it to be overly protective, though I'd turn it down slightly and fix issues where it hilariously thinks large clouds of leaves blowing across the road are obstacles to brake for.

buran77•2mo ago
> 2,000 may be stretching it but it is possible if the driver is trusting enough.

Yeah, trust and a lot of creative accounting of what constitutes successfully driving by itself for that long.

Would you put your child in one and let it cross a large city 100 times unattended?

simondotau•2mo ago
Driver profiles seem like a terrible answer to the question of choosing a maximum speed, both for the driver of the vehicle, and for Tesla — because it shifts the understanding of the car's behaviour from the driver to Tesla. I think it's insane that Tesla would take that risk.

IMHO, it's okay for the driver profiles to affect everything other than max speed, including aggressiveness of acceleration and propensity to change lanes. But since exceeding speed limits is "technically" breaking the law, the default behaviour of FSD should be to strictly obey speed limits, and drivers should be given a set of sliders to manually override them. Perhaps like a graphic EQ, with a slider for every 10 MPH band where you can decide how many MPH over that limit is acceptable.

This would be an inelegant interface, and intentionally so. Drivers should be fully in control of the decision to exceed the speed limit, and by how much. FSD should drive like a hard-nosed driving instructor unless the driver gives unambiguous permission to do otherwise.

[0] Note that I am describing this based on my understanding of the US environment. I am Australian, and our speed limits are strictly enforced at the posted speed, without exception. On any road, you should expect a fine if going 3–6 km/h [2–4 MPH] over and caught by a fixed or mobile camera. This applies literally anywhere, including highways. By contrast, in the USA, I understand that going 5–10 MPH over on highways has been socially normalised, and law enforcement generally disregards it.
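The per-band override scheme described above could be modeled as a simple lookup table. Everything here (names, bands, values) is hypothetical, not a real Tesla interface:

```python
# Hypothetical driver-configured table: posted limit (MPH) -> allowed MPH over.
# One entry per 10 MPH band, like sliders on a graphic EQ.
OVERAGE_BY_LIMIT = {
    25: 0,   # strict in town
    35: 0,
    45: 3,
    55: 5,
    65: 7,   # driver explicitly allows 7 over on highways
    75: 7,
}

def target_speed(posted_limit_mph: int) -> int:
    """Default to strictly obeying the limit unless the driver set an overage."""
    return posted_limit_mph + OVERAGE_BY_LIMIT.get(posted_limit_mph, 0)

print(target_speed(65))  # 65 + 7 = 72
print(target_speed(30))  # no slider set for this band: strictly obey, 30
```

The deliberately manual, per-band input matches the commenter's point: the decision to exceed the limit, and by how much, stays unambiguously with the driver.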

PunchyHamster•2mo ago
That also doesn't matter much as highways pump those stats up.
bastawhiz•2mo ago
I can imagine it doing fine on highways for a thousand miles. FSD has literally never managed to complete a trip involving city driving for me without disengaging or me having to stop it from doing something illegal. I'm not sure how many attempts I'm supposed to give it. Hell, autopilot doesn't even manage to consistently stay safely in a lane on I-40 between Durham and Asheville.
vessenes•2mo ago
Have you tried FSD 13/14 yet? I just upgraded to 13 from 12 and it's a massive improvement. Not sure what 14 is like. It's definitely added a 9 in the last year.
buran77•2mo ago
> have you tried

I've heard this so many times it's starting to be a meme. The system was claimed to be very capable from the beginning, then every version was a massive improvement, and yet we're always still in very dangerous, and honestly underwhelming territory.

Teslas keep racking up straight line highway miles where every intervention probably counts at most as 1 mile deducted from the total in the stats. Have one cross a busy city without interventions or accidents like a normal human driver is expected to.

terminalshort•2mo ago
It was very capable, and each version has been a big improvement. The first time I rode in a Tesla with FSD back in 2017 I was shocked by how good it was. Self driving tech has advanced so fast that we forget it was considered next to impossible even 15 years ago. You are judging past tech by 2025 standards.
buran77•2mo ago
Novelty is enough to look like "shockingly good". You were comparing "no self driving" to "some self driving". A jump from 0 to something always seems big. Standard driver assists were also impressive when they appeared on cars. In the meantime Tesla still makes a lot of claims about safety but doesn't trust the FSD enough to publish transparent, verifiable stats. That speaks louder than any of our anecdotes.

> You are judging past tech by 2025 standards.

That's very presumptuous of you. Every single person I know driving a Tesla told me the FSD (not AP) is bad and they would never trust it to drive without very careful attention. I can tell Teslas on FSD in traffic by the erratic maneuvers that are corrected by the driver, and that's a bad thing.

terminalshort•2mo ago
> Every single person I know driving a Tesla told me the FSD (not AP) is bad

I really don't believe this because everyone I know who drives a Tesla tells me the opposite. I tend to think this is an artifact of people who just irrationally hate Tesla because IRL every negative thing I hear about Teslas comes from people who don't own the cars and hate Elon Musk.

> they would never trust it to drive without very careful attention

Of course, because the product is not designed to drive without human supervision.

> I can tell Teslas on FSD in traffic by the erratic maneuvers that are corrected by the driver, and that's a bad thing.

I don't believe you actually can, because I don't notice any difference in the quality of driving between Teslas and any other cars on the road. (In fact, the only difference I can notice between drivers of different cars is with large trucks.) So, again, I write off such statements as more of the same emotionally driven desire to see a pattern where there isn't one.

buran77•2mo ago
> I really don't believe this

> I don't believe

> this is an artifact of people who just irrationally hate Tesla

> more of the same emotionally driven desire to see a pattern

Don't you find it curious that every opinion you don't like must be from irrational people hating Tesla, but opinions you do like are all rational and objective? It's as if we didn't define the sunk cost fallacy for exactly this. You're a rational person: if Tesla was confident in the numbers, wouldn't we have an avalanche of independently verifiable stats? Instead we're here playing these "nuh-uh" games, with you pretending you're speaking with an authority nobody else has. Does any other company go to such lengths to bury the evidence of their success? The evidence that supports their claims?

And of course I can tell FSD drivers: literally nobody else on the street will so often brake hard for absolutely no reason, or veer abruptly then correct and straighten out so hard the car wobbles, both on highways and in the city. If it's not the car, then it must be the drivers, but they wouldn't make such irrational moves.

P.S. The internet is full of video evidence of FSD making serious and unjustified mistakes. Every version brings new evidence. How do you explain those? Irrational haters inventing issues? The car misbehaving only for them? Because even if you film 10 times and get the mistake only once, it's still very serious.

cyberax•2mo ago
> I really don't believe this because everyone I know who drives a Tesla tells me the opposite.

I mean, I love my Tesla autopilot. It made my cross-country trips so much more enjoyable. I have several thousand hours on autopilot at this point.

That being said, I don't use it on regular city streets. Because it's just bad, in all kinds of ways. "Full self-driving" it is not.

terminalshort•2mo ago
Yeah, that's the type of feedback I absolutely do believe. Sounds like something someone would say about their car to me IRL. That's basically the standard I apply to internet comments.
bastawhiz•2mo ago
For what it's worth, that was not my experience. 12 didn't fix all of the issues I'd heard about its predecessor. 13 didn't fix all the issues I experienced with 12. "Better" isn't enough, it needs to be so good that every issue with the previous generation is resolved. It's never been close to that.
ryandrake•2mo ago
I'm also sick of hearing "have you tried?" And also "it's really improving!"

Maybe the manufacturer should try the next version. And test it. And then try the next version. And test it. And then continue until they have something that actually works.

bastawhiz•2mo ago
I tried 12 and 13. My car hasn't gotten 14 and I'm not interested in finding out if it'll get it, because at this point I'm simply not interested even if it's decent.

And there's no guarantee it'll be meaningfully better, if I'm being honest. Why would I believe that all the issues are fixed? For me to be interested, every single problem that I'd encountered needs to be resolved. Why would I even consider accepting anything less? It's not "partial" self driving. An incremental improvement is useless.

vessenes•2mo ago
Meh. I do a lot of beta testing, and enjoy it. I think it's fun to watch the evolution of this tech over the last seven or eight years. But you don't have to feel that way.
piva00•2mo ago
Has everyone else you encounter on the road agreed to participate in your beta testing?

That's the crux of it, and where traffic education mostly fails in the USA: the lack of consideration that the road is about safety for all users involved, the notion you are operating dangerous machinery around other people.

vessenes•2mo ago
The Tesla on FSD 13 is considerably more courteous and safer than at least the bottom quintile of drivers. It's much safer than my teenage drivers were in my family. Failure modes in city driving tend to be stopping, not plowing through pedestrians.
don_neufeld•2mo ago
Not sure if it’s evident in broader statistics yet, but I think that because Tesla got the early adopter market (tech savvy people), they are now losing that same market first.

I had a party at my house a couple months ago, mostly SF tech people. I found the Tesla owners chatting together, and the topic was how much FSD sucks and they don’t trust it.

I asked and no-one said they would buy a Tesla again. Distrust because they felt suckered by FSD was a reason, but several also just said Elon’s behavior was a big negative.

pmarreck•2mo ago
The fact that Elon has blown the San Franistan EV market should surprise absolutely no one.
bastawhiz•2mo ago
I own six EVs (three cars, one of which is a Tesla, and three motorcycles). My first EV was my Tesla.

We're on the cusp of trading the Tesla in for a Rivian most likely. I should be Tesla's target customer, but instead I'm exactly who you described:

- I don't like the brand. I don't like Elon. I don't like the reputation that the car attaches to me.

- I don't trust the technology. I've gotten two FSD trials, both scared the shit out of me, and I'll never try it again.

- I don't see any compelling developments with Tesla that make me want to buy another. Almost nothing has changed or gotten better in any way that affects me in the last four years.

They should be panicking. The Cybertruck could have been cool, but they managed to turn it into an embarrassment. There are so many alternatives now that are really quite good, and Tesla has spent the last half a decade diddling around with nonsense like the robot and the semi and the Cybertruck and the vaporware roadster instead of making cars for real people that love cars.

georgemcbay•2mo ago
> They should be panicking.

I'm sure they would be if the stock price had ever showed any signs of being based in reality.

But for now Elon can keep having SpaceX and xAI buy up all the unsold Teslas to make number go up.

If that ever stops working, just spin up a new company with a hyper-inflated valuation and have it acquire Tesla at some made up number. Worked for him once, why not try it again.

And at this point he can get even fraudier, with the worst possible realistic outcome being that he might get forced to pay a relatively small bribe and publicly humiliate himself for Trump a bit.

But there's really no more consequences to any sort of business fraud (for now) as long as you can afford the tribute.

#WorldLibertyFinancial

don_neufeld•2mo ago
The execution on the roadster baffles me.

IIRC the deposit was 250K, and I know people who signed up on the first day. Can you imagine a more dedicated fan?

How do you not deliver to that group? How big an own-goal is that?

dzhiurgis•2mo ago
Their ~mission was cheap cars for the masses. There are plenty of high-end EVs out there. It's nuts when smart people compare a 7-year-old $35k Tesla to a brand-new $80k Polestar or $100k Lucid.
nofriend•2mo ago
$50k. Point definitely remains.
don_neufeld•2mo ago
I believe the Founders series was 250K deposit.

https://teslamotorsclub.com/tmc/threads/202x-roadster-delay-...

tim333•2mo ago
The other thing is that various other companies built pretty much what Tesla was promising, like the Rimac Nevera and the Yangwang U9, showing it was quite doable if Tesla had put some enthusiastic engineers on it and said "go do it".
dzhiurgis•2mo ago
> instead of making cars for real people that love cars.

Whoosh. They've been saying Tesla is an AI company for nearly a decade. AI has been propping up the entire US economy for the last few years. The EV bandwagon left a long time ago.

Saying all that I wouldn't mind even cheaper Tesla - small screen, 1 camera instead of 11, fully offline, fully stainless steel, fully open source - basically minimally tech and maximally maintainable and maximum longevity.

seanmcdirmid•2mo ago
> Saying all that I wouldn't mind even cheaper Tesla - small screen, 1 camera instead of 11, fully offline, fully stainless steel, fully open source - basically minimally tech and maximally maintainable and maximum longevity.

What you describe would probably cost more money, not less. The market is small and analog tech is actually more expensive to produce with than digital tech.

dzhiurgis•2mo ago
I said nothing about analog.
seanmcdirmid•2mo ago
> I said nothing about analog.

But:

> basically minimally tech and maximally maintainable and maximum longevity.

Kind of implies it.

Tech is used to lower prices, not raise them. If you want to minimize tech, and you want the car to be maintainable by the end user or relatively cheap mechanics, and you want it to last as long as possible, that is going to cost a lot. Or you basically want a Lada Laika, the old ones, that could be repaired super easily. Anything with microchips is going to suffer if those chips die, and they aren't going to be easy to repair.

dzhiurgis•2mo ago
> Anything with microchips is going to suffer if those chips die, and they aren't going to be easy to repair.

AFAIK Tesla is already moving in that direction with unboxed manufacturing, where the same chip can be either a window controller or a brake controller. Having a single chip + open-source firmware would eliminate this issue.

bastawhiz•2mo ago
They make cars that mostly do their job. They don't make AI that does its job. They're not an AI company, they're a car company pretending they're not a car company.
rsynnott•2mo ago
The astonishing thing is that they haven't released a new car in six years and _do not appear to have one in the works_ (unless you count the Model Y facelift, but really that's pushing it). I think that's pretty much unheard of for an active car manufacturer over the last few decades.

Like, what are they _doing_? Do they still have R&D at all?

redserk•2mo ago
I partially agree. FSD seems fine to me but I wouldn’t buy a second Tesla. Tesla seems to have stopped caring about being a car company that caters to nerds/tech enthusiasts.

Mine has been an extremely well done vehicle and I was (and kind of am) bullish on FSD as a driver assistance technology, but a car is a 6-7 year investment for me and I have big doubts about their direction. They seem to have abandoned the idea of being a car company, instead chasing this robotaxi idea.

Up until 2023/2024, that was fine for my 6-7 year car lifecycle. Tesla was really cool when they let you do all sorts of backwards-compatible upgrades, but they seem to have abandoned that.

I’ve found it incredibly disappointing seeing their flailing direction now.

Rivian seems to still have a lot of the magic that Tesla had. They’re definitely a strong contender for my next vehicle in a year or two.

don_neufeld•2mo ago
Rivian is definitely up and coming. The increase of them around my neighborhood has been very noticeable over the past 12 months.
vessenes•2mo ago
Sorry but while I am a happy Waymo user, this is overblown and just incorrect. Waymo has datacenter oversight. (which, who cares, the product is great).

Tesla FSD 12 -> 13 was a massive jump that happened earlier this year. 14 is still rolling out.

Testing out 13 this weekend, it drove on country roads, noticed a road closure, rerouted, did 7 miles on non divided highways, navigated a town and chose a parking space and parked in it with zero interruptions. It even backed out of the driveway to start the trip. I didn't like the parking job and reparked; other than that, no hands, no feet, fully autonomous. Unlike 12, I had no 'comments' on the driving choices - rotaries were navigated very nicely, road hazards were approached and dealt with safely and properly. It was genuinely good.

Dislike Elon all you want, but Tesla FSD is improving rapidly, and to my experienced eyes adding 9s. Probably two or three more 9s to go, but it's not a maxed out technology.

energy123•2mo ago
> It was genuinely good.

You lack data to draw this conclusion. The most important factor is deaths per mile, which is sparse, so it requires aggregating data from many drivers before you have enough statistical power.
vessenes•2mo ago
Qualitative reactions matter. I didn't say it was genuinely safe. I can say that it was smooth, reacted better than most adult drivers to the situations I witnessed, and was significantly better at those things than the prior version. Safety is sparse, but steering, acceleration, decision making happen at like 60hz, and I would say I can make a qualitative judgment on it.
culi•2mo ago
Tesla's FSD has indeed made significant improvements in the past year (still way behind where it was promised to be even half a decade ago), but they are FAR from being able to operate an actual robotaxi service. Austin is an embarrassment. It seems that Tesla believes they can make more money fooling investors than they can on any core business model.
torginus•2mo ago
I just had a thought - a Waymo car costs $200k (maybe more) from a quick Google search. YoY returns on $200k in the S&P are about 10%, while an Uber driver takes home about $40-$50k - so in terms of cost, they are about 2x-2.5x of each other, with the Waymo likely needing expensive maintenance/support infrastructure, bringing the totals much closer.

Which means if Tesla can really build that Cybercab - with an underpowered motor, small battery, plastic body panels, just cameras (which I think they promised to sell under $20k) - they'll be able to hit an expense level and profitability that Waymo will only reach in, say, 10 years.

Even if you don't want to talk about non-existing hardware, a Model 3's manufacturing cost is surely much lower than a Waymo.

Once (if) they make self driving work at any point in time before Waymo gets to the same level of cost - they'll be the more profitable business.

Not only that, they'll be able to enter markets where the cost of Waymo and what you can charge for taxi rides is so far apart that it doesn't make sense for them - in this sense, they'll have a first mover advantage.
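For what it's worth, the back-of-envelope math above can be sketched out explicitly. All figures are the assumptions from the comment (the ~$200k Waymo vehicle, 10% S&P opportunity cost, $40-50k driver take-home, and Tesla's promised sub-$20k Cybercab price), not verified data:

```python
# Back-of-envelope: annualized capital cost of a robotaxi vs. a human
# driver's wage, using the (unverified) figures assumed in the comment.

def annual_capital_cost(vehicle_cost, opportunity_rate=0.10):
    """Opportunity cost of capital tied up in the vehicle per year."""
    return vehicle_cost * opportunity_rate

waymo_vehicle = 200_000    # assumed per-vehicle cost
cybercab_vehicle = 20_000  # Tesla's promised target price
driver_wage = 45_000       # mid-point of the $40-50k take-home estimate

# Ratio of a human driver's wage to each robotaxi's annual capital cost:
print(driver_wage / annual_capital_cost(waymo_vehicle))     # 2.25 -> the "2x-2.5x"
print(driver_wage / annual_capital_cost(cybercab_vehicle))  # 22.5 -> order-of-magnitude gap
```

This ignores maintenance, depots, remote-oversight staff, and insurance on both sides, so it's only a floor on the comparison, but it shows why the claimed vehicle cost gap dominates the argument.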

cyberax•2mo ago
Waymo cars are basically priceless at this point. As in: the car cost doesn't matter. They've so far spent multiple times their fleet's costs on R&D. The fact that they're getting some pocket cash from paid fares is inconsequential for their bottom line.

Any realistic mass deployment will use cheaper cars, more suitable for taxi service.

simondotau•2mo ago
The cost isn't an issue at their current scale, perhaps not even one order of magnitude larger, because today's fleet is as much a devkit as it is a consumer product. But that strategy only takes them so far. Cost is a dangerous impediment to scale, and it will be their albatross long before their business model has a chance of becoming profitable, let alone cumulatively profitable.
cyberax•2mo ago
Other companies have this covered, I think. Zoox cars in Las Vegas are probably a good prototype for more realistic production vehicles.
simondotau•2mo ago
Not to mention, Waymo is moving from Jaguar to Zeekr for its next-gen fleet, meaning 100% import tariffs on those Chinese-built base vehicles before it even begins the expensive retrofit process.

The core problem with Waymo’s model is its lack of (economically rational) scalability. Shipping finished vehicles to a second facility to strip and rebuild them as robotaxis is inherently inefficient, and cannot be made efficient by scaling up. To achieve meaningful volume, Waymo would need to contract an automaker to build finished robotaxis, ideally domestically or where tariffs are sufficiently low.

Obviously Tesla's solution only works if their vision-only strategy bears fruit. Assuming it does (a wildly controversial assumption in this space, but let's go with it for now) the economics are utterly wild. It's difficult to imagine how any competitor could come close to competing on cost or scale. And that's assuming the Model Y, ignoring the as-yet hypothetical Cybercab.

I suppose Alphabet could buy the corpse of Canoo. I suspect that if it had a plausible manufacturing ramp, it would have been snapped up quickly. Automotive-scale manufacturing is a crucible, and it destroys most who attempt it. In fact most die long before they even begin.

Animats•2mo ago
> Not to mention, Waymo is moving from Jaguar to Zeekr for its next-gen fleet, meaning 100% import tariffs on those Chinese-built base vehicles before it even begins the expensive retrofit process.

Isn't Waymo going with Ioniq 5 vehicles built for them in the US by Hyundai, with all the vehicle sheet metal and other mods installed at the factory? That was the story a few months ago.

whatever1•2mo ago
It's like being a loan guarantor. If the loan gets paid off, the lender/borrower get the benefits. You get 0. If the loan goes delinquent, you are on the hook.
CalChris•2mo ago
I love TACC. I occasionally use AutoSteer. But I tried FSD and it was much more stressful than driving.
pmarreck•2mo ago
Sounds like a problem with all of AI.

Having driven Tesla FSD and coded with Claude/Codex, it suffers from the exact same issues- Stellar performance in most common contexts, but bizarrely nonsensical behavior sometimes when not.

Which is why I call it "thunking" (clunky thinking) instead of "thinking". And also why it STILL needs constant monitoring by an expert.

neuroelectron•2mo ago
I mean, obviously. A significant part of the entire economy runs on plausible deniability. Amazon is a great example of this, keeping similar products from different vendors in the same bin. Regulations are there to keep out competitors, etc.
sciencesama•2mo ago
When the stock goes up, Musk gets money; when it goes down, the people who short will be blamed!
whoisthemachine•2mo ago
Tesla's FSD will be good enough when you get an insurance discount for using it.
sabareesh•2mo ago
Technically you kind of get this in Nevada when using Tesla insurance and if you drive 100% FSD. If you drive manually, you pretty much get dinged for random forward collision warnings, which are super sensitive.
whoisthemachine•2mo ago
That does sound like a punishment for not using it. My statement was more whether they actively sell you a discount for using it.
romaaeterna•2mo ago
> So, for example, when a Florida driver on Autopilot drops his phone and blows through a stop sign, hitting a car which then hits two pedestrians, killing one, Tesla will claim “this driver was solely at fault.” In that case, a judge agreed that the driver was mostly at fault, but still assigned 33% of blame to Tesla, resulting in a $243 million judgment against the company.

His foot was on the gas though

Looking at this author's other articles, he seems more than a bit unhinged when it comes to Tesla: https://electrek.co/author/jamesondow/ Has Hacker News fallen for clickbait? (Don't answer)

johnnyApplePRNG•2mo ago
A couple of facts on the Florida case: it was a jury verdict, not a judge. The jury found Tesla 33% at fault for a 2019 Key Largo crash. Damages were $129M compensatory (Tesla responsible for 33% of that) plus $200M punitive, for $243M total.

The driver admitted he looked down after dropping his phone and blew a stop sign; Tesla argues his foot was on the accelerator, but the jury still assigned partial fault because Autopilot was allowed to operate off limited-access highways and the company didn’t do enough to prevent foreseeable misuse. The driver had already settled separately.

rtpg•2mo ago
Is there any blame to be associated with Tesla for its feature? What's the right percentage for you? 20%? 10%? 5%? 0%?

If the wheels of the car fell off, would Tesla have any blame for that? If we had laid wires all along the road to allow for automatic driving, and Tesla's software misread them and caused a crash, would it be to blame?

When is Autopilot safe to use? Is it ever safe to use? Is the fact that people seem to be able to entirely trick the Autopilot to ignore safety attention mechanisms relevant at all?

If we have percentage-based blame then it feels perfectly fine to share the blame here. People buy cars assuming that the features of the car are safe to use to some extent or another.

Maybe it is just 0%. Like cruise control is a thing that exists, right? But I'm not activating cruise control anywhere near any intersection. Tesla calls their thing autopilot, and their other thing FSD, right? Is there nothing there? Maybe there is no blame, but it feels like there's something there.

romaaeterna•2mo ago
0%. This is entirely on the driver. He's someone who should spend a few years in prison, and then never be allowed to have a license again.

A foot on the gas overrides braking on Autopilot and causes it to flash a large message on the screen: "Autopilot will not brake / Accelerator pedal is pressed"

progbits•2mo ago
Yet almost any other car made after that Tesla (and much cheaper) will automatically brake if it's about to hit something, no AI involved, just radar obstacle detection.
ninalanyon•2mo ago
> any other car made after that Tesla (and much cheaper) will automatically brake if it's about to hit something,

My 2015 Tesla S brakes if it detects something in its path using radar and usually correctly identifies the object type (truck, car, motorcycle, cyclist, pedestrian) using the camera.

progbits•2mo ago
Good for you, but

1) didn't they drop the radar?

2) clearly didn't work in this case

jfoster•2mo ago
Also check the author's Bluesky: https://bsky.app/profile/jamesondow.bsky.social

It can't be healthy to be so obsessed with something/someone you dislike.

RajT88•2mo ago
The modern, "you're holding it wrong" approach to product flaws.

And by the way - I have heard big tech folks repeat that phrase, not really understanding the moral of that Steve Jobs story.

tippytippytango•2mo ago
Ultimately, anecdotes and testimonials of a product like this are irrelevant. But the public discourse hasn't caught up with it. People talk about it like it's a new game console or app, giving their positive or negative testimonials, as if this is the correct way to validate the product.

Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.

This becomes especially relevant in an intermediate regime where the system can go 10,000 miles without a catastrophic incident. At that level of reliability you can find lots of people who claim "it's driven me around for 2 years without any problem, what are you complaining about?"

A 10,000 mile per incident fault rate is actually catastrophic. That means the average driver has a serious, life-threatening incident every year at an average driving rate. That would be a public safety crisis.

We run into the problem again in the 100,000 mile per incident range. This is still not safe. Yet that's reliable enough that you can find many people who can potentially get lucky and live their whole lives without ever seeing the system cause a catastrophic incident. And it's still 2-5x worse than the average driver.
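To put numbers on the arithmetic above - a minimal sketch, assuming roughly 13,000 miles/year for an average US driver (a commonly cited ballpark; the exact figure varies by source):

```python
# Convert a miles-per-incident fault rate into expected catastrophic
# incidents per driver per year, assuming ~13,000 miles/year of average
# US driving (an assumed ballpark, not an official statistic).

AVG_MILES_PER_YEAR = 13_000

def incidents_per_year(miles_per_incident):
    """Expected incidents per year for an average-mileage driver."""
    return AVG_MILES_PER_YEAR / miles_per_incident

print(incidents_per_year(10_000))   # 1.3 -> more than one serious incident/year
print(incidents_per_year(100_000))  # 0.13 -> roughly one every 7-8 years,
                                    # still far worse than typical human rates
```

The point of the sketch is that per-mile rates compound quietly: a system that feels flawless for two years of personal use can still be a public safety crisis at fleet scale.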

irjustin•2mo ago
> Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.

100% agreed, and I'll take it one step further - level 3 should be outright banned/illegal.

The reason is that it allows blame shifting, exactly as is happening right now. Drivers mentally expect level 4, while legally the company will position the fault, in as much as it can get away with, on the driver - effectively level 2.

atlex2•2mo ago
They're building on a false premise that human equivalent performance using cameras is acceptable. That's the whole point of AI - when you can think really fast, the world is really slow. You simulate things. Even with lifetimes of data, the cars still will fail in visual scenarios where error bars on ground truth shoot through the roof. Elon seems to believe his cars will fail in similar ways to humans because they use cameras. False premise. As Waymo scales, human just isn't good enough, except for humans.
irjustin•2mo ago
So, I agree with what you're saying, but that doesn't matter.

The legal standing doesn't care what tech is behind it. 1000 monkeys for all it matters. The point is that level 3 is the most dangerous level, because neither the public nor the manufacturer properly operates in this space.

atlex2•2mo ago
Yeah - Tesla is in a weird level 2-4 space. They've managed to shunt liability onto their customers until now..
terminalshort•2mo ago
If there was actually a rate of one life threatening accident per 10,000 miles with FSD that would be so obvious it would be impossible to hide. So I have to conclude the cars are actually much safer than that.
tippytippytango•2mo ago
Above I was talking more generally about full autonomy. I agree the combined human + FSD system can be at least as safe as a human driver, perhaps more, if you have a good driver. As a frequent user of FSD, its unreliability can be a feature: it constantly reminds me it can't be fully trusted, so I shadow drive and pay full attention. So it's like having a second pair of eyes on the road.

I worry that when it gets to 10,000 mile per incident reliability that it's going to be hard to remind myself I need to pay attention. At which point it becomes a de facto unsupervised system and its reliability falls to that of the autonomous system, rather than the reliability of human + autonomy, an enormous gap.

Of course, I could be wrong. Which is why we need some trusted third party validation of these ideas.

terminalshort•2mo ago
Yeah, I agree with that. There's a potentially dangerous attention gap that could just play into the fundamental weakness of the human brain's ability to pay attention for long periods of time with no interaction. Unfortunately I don't see any possible way to validate this without letting the tech loose. You can't get good data on this without actual driving in real road conditions.
Veserv•2mo ago
At a certain point you do need to test in real road conditions. However, there is absolutely no need to jump straight from testing in lab conditions to “testing” using unmonitored, untrained end users.

You use professionally trained operators with knowledge of the system design and operation, following a designed safety plan to minimize prototype risks. At no point should your test plan increase danger to members of the public. Only when you fix problems faster than that test procedure can find them do you expand scope.

If you follow the standard automotive pattern, you then expand scope to your untrained but informed employees using monitored systems. Then untrained but informed employees using production systems. Then informed early-release customers. Then, once you stop being able to find problems regularly at all of those levels, you do a careful monitored release to the general public, verifying the safety properties are maintained. Then you finally have a fully released "safe" product.

buran77•2mo ago
FSD never drives alone. It's always supervised by another driver legally responsible for correcting it. More importantly, we have no independently verified data about the self-driving incidents. Quite the opposite: Tesla has repeatedly obscured data or impeded investigations.

I've made this comparison before but student drivers under instructor supervision (with secondary controls) also rarely crash. Are they the best drivers?

I am not a plane pilot but I flew a plane many times while supervised by the pilot. Never took off, never landed, but also never crashed. Am I better than a real pilot or even in any way a competent one?

surgical_fire•2mo ago
> FSD never drives alone.

So much for "Full Self" whatever.

FeloniousHam•2mo ago
I'll grant that the marketing oversells the capabilities of the system, but (as I have commented repeatedly in these FSD threads): anyone using it knows within a couple of days what their comfort level is. I'm utterly unconvinced that any user is actually confused about the capacity of the system just because it's named "Autopilot" or "Full Self Driving".

The fact of the technology is that while imperfect, it is absolutely a marvel, and incredibly useful. I will never drive home again after having a beer after work, or when I'm tired after a long day. I can only attribute the angry skepticism in the comments to willful ignorance or lack of in-the-seat experience. I use it every day, and it drives me driveway to parking with only occasional interventions (per week!).

I'll throw in that my wife hates it (as a passenger or driver), but she has a much lower tolerance for any variance from expected human driving behaviour (eg. lane choices, overly cautious behaviour around cars waiting to enter traffic, etc).

surgical_fire•2mo ago
> I use it everyday, it drives me driveway to parking with only occasional interventions.

Let's hope you never become a statistic.

I am happy that FSD is not permitted where I live. I would be concerned to drive close to one. Call it willful ignorance if you prefer.

buran77•2mo ago
> I can only attribute the angry skepticism in the comments to willful ignorance or lack of in-the-seat experience

Next to "the latest version really fixed it, for realsies this time", the "anyone who doesn't like it is ignorant or has irrational hate for Tesla" must be the second most sung hymn among a small but entirely too vocal group of Tesla owners. Nothing brings down a conversation as quickly as someone like you, trying to justify your purchase by insulting everyone who doesn't agree with your sunk-cost-fallacy-driven opinions.

FeloniousHam•2mo ago
> Nothing brings down a conversation as quickly as someone like you, trying to justify your purchase by insulting everyone who doesn't agree with your sunk-cost-fallacy-driven opinions.

I don't have any sunk cost in FSD. The car, sure, but it's a fine electric car that I got when there weren't many options (especially at a reasonable price).

I felt I was being generous. My inclination is that animosity to Musk's odious politics clouds the rational judgement of many critics (and they mostly have no first-hand experience with FSD for any length of time).

simondotau•2mo ago
It can be misleading to directly compare disengagements to actual catastrophic incidents.

The human collision numbers only count actual incidents, and even then only ones which have been reported to insurance/authorities. It doesn't include many minor incidents such as hitting a bollard, or curb rash, or bump-and-run incidents in car parks, or even vehicle-on-vehicle incidents where both parties agree to settle privately. And the number certainly excludes ALL unacceptably close near misses. There are no good numbers for any of these, but I'd be shocked if minor incidents weren't an order of magnitude more common, and near misses another order of magnitude again.

Whereas an FSD disengagement could merely represent the driver's (very reasonable) unwillingness to see if the software will avoid the incident itself. Some disengagements don't represent a safety risk at all, such as when the software is being overly cautious, e.g. at a busy crosswalk. Some disengagements for sure were to avoid a bad situation, though many of these would have been non-catastrophic (such as curbing a wheel) and not a collision which would be included in any human driver collision statistics.

omgwtfbyobbq•2mo ago
As a robotaxi, yes. That's why Tesla's rollout is relatively small/slow, has safety monitors, etc...

FSD, what most people use, is ADAS, even if it performs a lot of the driving tasks in many situations, and the driver needs to always be monitoring it, no exceptions.

The same applies to any ADAS. If it doesn't work in a situation, the driver has to take over.

julianlam•2mo ago
Sounds a lot like chickenization.
tonyhart7•2mo ago
Thankless work, and against all odds.

I wonder what the driving force behind this is, because at some point money didn't matter.

cduzz•2mo ago
My question about the safety of "FSD" or whatever autonomous cars is -- how much safety reputation can you burn in a new market that offers huge convenience? Which is more important, being first or being best?

Apple's famous for not being first, but being best to a well established market and cleaning up.

People certainly don't remember de Havilland, which was first to the market with a passenger jet airliner.

I certainly won't be putting my kids in any of these, and especially not in a vehicle operated by a company with a "devil may care" approach to safety.

Sure, it works fine in Arizona or Texas; let's see it work okay with a snow storm in Boston.

torginus•2mo ago
This is a bit off topic, but I'm sorta interested in a general survey of how other manufacturers stack up compared to the big two - particularly Mercedes from Europe, who are said to have a pretty good solution, and all the Chinese brands. It's not uncommon to see even mid-range Chinese cars with lidar integrated; I wonder how good they are.