
Trump moves to dismantle major US climate research center in Colorado

https://www.usatoday.com/story/news/politics/2025/12/16/trump-dismantle-national-center-atmospher...
1•garrettdreyfus•19s ago•0 comments

Ask HN: Would you use self-hosted payments to avoid payout holds?

1•alexmccain6•49s ago•0 comments

Evaluating AI's ability to perform scientific research tasks

https://openai.com/index/frontierscience/
1•EvgeniyZh•4m ago•0 comments

We validate idea and making $1000 MRR in 4 month

https://www.indiehackers.com/post/how-we-validate-idea-and-making-1000-mrr-in-4-month-6330abd1e9
1•manobb•6m ago•0 comments

Nex-AGI DeepSeek-v3.1-Nex-N1

https://huggingface.co/nex-agi/DeepSeek-V3.1-Nex-N1
1•kristianp•8m ago•0 comments

DeepSeek v3.1 Nex N1

https://openrouter.ai/nex-agi/deepseek-v3.1-nex-n1:free
1•kristianp•8m ago•0 comments

GPT-5.2-high LMArena scores released, OpenAI falls from #6 to #13

https://lmarena.ai/leaderboard
2•reed1234•11m ago•1 comments

TNI and TNI-R: Transient Node Integration for Precision Orbital

https://zenodo.org/records/17809868
1•okushigue•14m ago•1 comments

Aerogel can make the ocean drinkable [video]

https://www.youtube.com/shorts/OW9Mq3wrEqY
1•thelastgallon•16m ago•0 comments

ACM Digital Library started showing AI summaries of articles with abstracts

https://dl.acm.org/generative-ai/summarizations
2•andrybak•16m ago•1 comments

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

https://arxiv.org/abs/2512.09742
3•bediger4000•16m ago•0 comments

'Twitter never left:' X sues Operation Bluebird for trademark infringement

https://www.theverge.com/news/845882/x-corp-operation-bluebird-twitter-lawsuit-trademark-infringe...
2•g-b-r•19m ago•0 comments

The Joy of Hex and Brouwer's Fixed Point Theorem (2013)

https://vigoroushandwaving.wordpress.com/2013/09/30/the-joy-of-hex-and-brouwers-fixed-point-theorem/
2•nill0•19m ago•0 comments

New Ways to Corrupt LLMs

https://cacm.acm.org/blogcacm/new-ways-to-corrupt-llms/
3•zdw•28m ago•0 comments

AI and camera counter human trafficking

https://spectrum.ieee.org/traffickcam-human-trafficking-hotel-ai
1•asdefghyk•32m ago•1 comments

AI Browser Extensions Leave Fingerprints Everywhere

https://webdecoy.com/blog/detect-ai-browser-extensions-claude-chatgpt-copilot/
3•cport1•33m ago•0 comments

Norman Podhoretz, 1930-2025

https://www.commentary.org/john-podhoretz/norman-podhoretz-1930-2025/
1•stmw•42m ago•1 comments

FTX insider Caroline Ellison has been moved out of prison

https://www.businessinsider.com/caroline-ellison-prison-release-ftx-sam-bankman-fried-2025-12
10•harambae•49m ago•1 comments

The Genius Effects of Old Movies [video]

https://www.youtube.com/watch?v=TunR4zCQ5Fk
3•billybuckwheat•55m ago•1 comments

Show HN: Made a Visionboard Tool

https://visionboardit.art/
1•girlwhocode•55m ago•0 comments

Minimum Viable Benchmark (For Evaluating LLMs)

https://blog.nilenso.com/blog/2025/11/28/minimum-viable-benchmark/
1•todsacerdoti•56m ago•0 comments

Fara-7B: An Efficient Agentic Model for Computer Use

https://www.microsoft.com/en-us/research/blog/fara-7b-an-efficient-agentic-model-for-computer-use/
1•mjshashank•58m ago•0 comments

Apple TV's new intro was done practical, not CGI or AI [video]

https://www.youtube.com/shorts/C3uLRJGVkmo
4•busymom0•1h ago•3 comments

Luminar Technologies, Inc. Initiates Voluntary Chapter 11 Proceedings

https://investors.luminartech.com/news-events/press-releases/detail/110/luminar-technologies-inc-...
2•rguiscard•1h ago•0 comments

The Lost Generation

https://www.compactmag.com/article/the-lost-generation/
6•koolba•1h ago•0 comments

Video: Lunar impact flash detected on the moon by Armagh Observatory

https://phys.org/news/2025-12-video-lunar-impact-moon-armagh.html
3•1659447091•1h ago•0 comments

How I Assess Open Source Libraries

https://nesbitt.io/2025/12/15/how-i-assess-open-source-libraries.html
1•gpi•1h ago•0 comments

Deaf Crocodile Blu-Rays

https://deafcrocodile.com/collections/blu-rays
2•gregsadetsky•1h ago•0 comments

The Core Problems of AI Coding

https://magong.se/posts/real-problems-ai-coding-lesswrong
2•mikasisiki•1h ago•0 comments

Project Zeros New Website

https://projectzero.google/
1•0xkato•1h ago•1 comments

Tesla reports another Robotaxi crash

https://electrek.co/2025/12/15/tesla-reports-another-robotaxi-crash-even-with-supervisor/
119•hjouneau•2h ago

Comments

thomassmith65•2h ago

  With 7 reported crashes at the time, Tesla’s Robotaxi was crashing roughly 
  once every 40,000 miles [...]. For comparison, the average human driver 
  in the US crashes about once every 500,000 miles. 
  This means Tesla’s “autonomous” vehicle, which is supposed to be the future of safety, 
  is crashing 10x more often than a human driver.
That is a possible explanation for why Musk believes in people having 10x as many children. /s
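The arithmetic behind the quoted comparison is easy to check; a quick sketch (the 40,000 and 500,000 miles-per-crash figures are the article's estimates, not independently verified data):

```python
# Back-of-the-envelope check of the crash-rate comparison quoted above.
# Both per-crash mileage figures come from the article, not verified data.
crashes = 7
robotaxi_miles_per_crash = 40_000   # article's figure for Tesla's Robotaxi
human_miles_per_crash = 500_000     # article's figure for an average US driver

implied_fleet_miles = crashes * robotaxi_miles_per_crash
ratio = human_miles_per_crash / robotaxi_miles_per_crash

print(f"Implied fleet mileage: {implied_fleet_miles:,} miles")
print(f"Crash frequency vs. human baseline: {ratio:.1f}x")  # 12.5x, rounded down to "10x" in the comment
```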
TheAmazingRace•2h ago
When you have a CEO like Elon who swears up and down that you only need cameras for autonomous vehicles, and who skimps on crucial extras like lidar, can anyone be surprised by this result? Tesla also likes to take the motto of "move fast and break things" to a fault.
DoesntMatter22•1h ago
Turns out Waymo hits a lot of things too. Why isn't Lidar stopping that?
mmooss•1h ago
What is Waymo's accident rate? (Edit: Tesla's is in the article, at least for that region.)
tasty_freeze•1h ago
And there is a linked article about Waymo's data reporting, which is much more granular and detailed, whereas Tesla's is lumpy and redacted. Anyway, Waymo's data, with more than 100M miles of driverless operation, shows a 91% reduction in accidents vs. humans. Tesla's is 10x the human accident rate according to the Austin data.
TheAmazingRace•1h ago
Last I checked, Robotaxi has a safety driver, whereas Waymo is completely self driving, yet has a very good safety record. That speaks volumes to me.

https://waymo.com/safety/impact/

natch•1h ago
You’re posting a link from Waymo itself as evidence and pretending that’s an equivalent information source to an article posted by a disingenuous Tesla hater. The gullibility here is striking: accept the biased article hook, line, and sinker, then post another biased source which, it seems, you also have no impulse to question.
dang•4m ago
Could you please not post in the flamewar style to HN? We're trying for something else here.

https://news.ycombinator.com/newsguidelines.html

themafia•1h ago
Completely self driving? Don't they go into a panic mode, stop the vehicle, then call back to a central location where a human driver can take remote control of the vehicle?

They've been seen doing this at crime scenes and in the middle of police traffic stops. That speaks volumes too.

daheza•1h ago
Incorrect. Humans never take over the controls. An operator is presented with a set of options and chooses one, which the car then performs. The human is never in direct control of the vehicle. If this process fails, they send a physical human to drive the car.
guywithahat•1h ago
The more I've looked into the topic, the less I think the removal of lidar was a cost issue. I think there are a lot of benefits to simplifying your sensor tech stack, and while I won't pretend to know the best solution, removing things like lidar and ultrasonic sensors seems to have been a decision about improving performance. By doubling down on cameras, your technical team can remain focused on one sensor technology, and you don't have to deal with data priority and trust in the same way you do when you have a variety of sensors.

The only real test will be who creates the best product, and while waymo seems to have the lead it's arguably too soon to tell.

vjvjvjvjghv•1h ago
Usually you would go in with the max amount of sensors and data, make it work, and then see what can be left out. It seems dumb to limit yourself from the beginning if you don’t know yet what really works. But then I am not a multi-billionaire, so what do I know?
fpoling•51m ago
Well, we know that vision works, based on human experience. So a few years ago it was a reasonable bet that cameras alone could solve this. The problem with Tesla is that they continue to insist on that even after it became apparent that vision alone, with current tech and machine learning, does not work. They don't even want to use radar again, even though radar does not cost much and is very beneficial for safety.
bsder•30m ago
> Well we know that vision works based on human experience.

Actually, we know that vision alone doesn't work.

Sun glare. Fog. Whiteouts. Intense downpours. All of them cause humans to get into accidents, and electronic cameras aren't even as good as human eyes due to dynamic range limitations.

Dead reckoning with GPS and maps is a huge advantage that autonomous cars have over humans. No matter what the conditions are, autonomous cars know where the car is and where the road is. No sliding off the road because you missed a turn.

Being able to control and sense the electric motors at each wheel is a big advantage over "driving feel" from the steering wheel and your inbuilt sense of acceleration.

Radar/lidar is just all upside above and beyond what humans can do.

_aavaa_•1h ago
Having multiple sources of data is a benefit, not a problem. Entire signal processing and engineering domains exist to take advantage of this. Even the humble Kalman filter lets you combine multiple noisy sources to get a more accurate result than would be possible using any one source alone.
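The fusion claim above is easy to demonstrate numerically. A minimal sketch, with hypothetical noise variances, of inverse-variance weighting (the static, one-shot case of a Kalman update), showing the fused estimate beats either sensor alone:

```python
import random

random.seed(42)
TRUE = 10.0
VAR_A, VAR_B = 4.0, 1.0  # hypothetical sensor noise variances

def fuse(z_a, z_b):
    # Inverse-variance weighting: the static case of a Kalman update
    # for two independent measurements of the same quantity.
    w_a = (1 / VAR_A) / (1 / VAR_A + 1 / VAR_B)
    return w_a * z_a + (1 - w_a) * z_b

# Monte Carlo estimate of the mean squared error of each sensor
# alone and of the fused estimate.
n = 100_000
mse = {"a": 0.0, "b": 0.0, "fused": 0.0}
for _ in range(n):
    z_a = random.gauss(TRUE, VAR_A ** 0.5)
    z_b = random.gauss(TRUE, VAR_B ** 0.5)
    mse["a"] += (z_a - TRUE) ** 2
    mse["b"] += (z_b - TRUE) ** 2
    mse["fused"] += (fuse(z_a, z_b) - TRUE) ** 2
for key in mse:
    mse[key] /= n

# Theory predicts ~4.0, ~1.0, and 1/(1/4 + 1/1) = 0.8 respectively.
print(mse)
```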
fpoling•57m ago
Kalman filters and more advanced aggregators add non-trivial latency. So even if one does not care about cost, there can be a drawback from having an extra sensor.
_aavaa_•56m ago
Yes, there are tradeoffs to be made, but having to reconcile multiple sensors is not intrinsically a negative.

But also, if you didn’t get the right result, I don’t care how quickly you didn’t get it.

MadnessASAP•31m ago
The latency from video capture and recognition is going to be so significant that it makes all other latency sources not even worth mentioning.
guywithahat•46m ago
What I've heard out of Elon and engineers on the team is that some of these variations of sensors create ambiguity, especially around faults. So if you have a camera and a radar sensor and they're providing conflicting information, it's much harder to tell which is correct compared to just having redundant camera sensors.

I will also add, from my personal experience, that while some filters work best together (like IMU/GNSS), we usually used either lidar or camera, not both. Part of the reason was that combining them started requiring a lot more overhead and cross-sensor experts, and it took away from the actual problems we were trying to solve. While I suppose one could argue this is a cost issue (just hire more engineers!), I do think there's value in simplifying your tech stack whenever possible. The fewer independent parts you have, the faster you can move, and the more people can become experts on one thing.

Again, Waymo's lead suggests this logic might be wrong, but I think there is a solid engineering defense for moving toward just computer vision. Cameras are by far the best sensor, and there are tangible benefits other than just cost.

ambicapter•39m ago
I don’t understand how running into difficulties when trying to solve a problem can be interpreted as “[taking] away from the actual problem”.
karlgkk•33m ago
Counterpoint: Waymo

> solid engineering defense for moving towards just computer vision

COUNTERPOINT: WAYMO

solfox•1h ago
To tell what? Waymo is easily 5 years ahead on the tech alone, let alone the rollout of autonomous service. They may eventually catch up, but they are definitely behind.
dyauspitr•48m ago
This is a solved problem. Many people I know, including myself, use Waymos on a weekly basis. They are rock solid. Waymo has pretty unequivocally solved the problem. There is no wait and see.
goosejuice•31m ago
Never mind the Waymos rolling by stopped school buses.

https://www.npr.org/2025/12/06/nx-s1-5635614/waymo-school-bu...

xp84•22m ago
This seems solvable, no? Not saying it isn’t really damn important, but those have stop signs and flashing lights. It seems like they can fix that.
dyauspitr•7m ago
You can cavil about this but it’s weak.
lotsofpulp•45m ago
>The only real test will be who creates the best product, and while waymo seems to have the lead it's arguably too soon to tell.

Price is a factor. I’ve been using the free self driving promo month on my model Y (hardware version 4), and it’s pretty nice 99% of the time.

I wouldn’t pay for it, but I can see a person with more limited faculties, perhaps due to age, finding it worthwhile. And it is available now in a $40k vehicle.

It’s not full self driving, and Waymo is obviously technologically better, but I don’t think anyone is beating Tesla’s utility to price ratio right now.

cameldrv•6m ago
Honestly I think it's more that he was backed into a corner. The Teslas from ~9 years ago, when they first started selling "full self driving" as an option, had some OK cameras and, by modern standards, a very crappy radar.

The radar they had really couldn't detect stationary objects. It relied on the doppler effect to look for moving objects. That would work most of the time, but sometimes there would be a stationary object in the road, and then the computer vision system would have to make a decision, and unfortunately in unusual situations like a firetruck parked at an angle to block off a crash site, the Tesla would plow into the firetruck.

Given that the radar couldn't really ever be reliable enough to create a self driving vehicle, after he hired Karpathy, Elon became convinced that the only way to meet the promise was to just ignore the radar and get the computer vision up to enough reliability to do FSD. By Tesla's own admission now, the hardware on those 2016+ vehicles is not adequate to do the job.

All of that is to say that IMO Elon's primary reason for his opinions about Lidar are simply because those older cars didn't have one, and he had promised to deliver FSD on that hardware, and therefore it couldn't be necessary, or he'd go broke paying out lawsuits. We will see what happens with the lawsuits.

rich_sasha•31m ago
Musk's success story is taking very bold bets almost flippantly. These things have a premium associated with them, because to most people they are so toxic that they would never consider them.

Every time when he has the choice to do something conservative or bold, he goes for the latter, and so long as he has a bit of luck, that is very much a winning strategy. To most people, I guess the stress of always betting everything on red would be unbearable. I mean, the guy got a $300m cash payout in 1999! Hands up who would keep working 100 hour weeks for 26 years after that.

I'm not saying it is either bad or good. He clearly did well out of it for himself financially. But I guess the whole cameras/lidar thing is similar. Because it's big, bold, from the outset unlikely to work, and it's a massive "fake it till you make it" thing.

But if he can crack it, again I guess he hits the jackpot. Never mind cars, they are expensive enough that Lidar cost is a rounding error. But if he can then stick 3d vision into any old cheap cameras, surely that is worth a lot. In fact wasn't this part of Tesla's great vision - to diversify away from cars and into robots etc. I'm sure the military would order thousands and millions of cheapo cameras that work 90% as well as a fancy Lidar - while being fully solid state etc.

That he is using his clients as lab rats for it is yet another reason why I'm not buying one. But to me this is totally in character for Musk.

BrenBarn•24m ago
The fact that he's able to fake it until he makes it is a failure of our society. He should be impoverished and incarcerated.
7e•1h ago
Who will save humanity from Elon Musk?
platevoltage•36m ago
Humanity could save themselves if they would get over their Stockholm Syndrome.
altairprime•1h ago
Slap a STUDENT DRIVER bumper sticker on them so we can all give them space!
kevin_thibedeau•1h ago
It should say INDUSTRIAL ROBOT. You wouldn't willingly enter the hazard zone of a KUKA. Why should we casually accept them roaming free?
0_____0•1h ago
If you're familiar with industrial hazard mitigation, looking at how roadways are constructed is kind of crazy making.
dylan604•52m ago
I'm now imagining Robotaxis with the impact absorbing extensions that highway trucks have when leading the lane closures.

Something like this: https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh...

koinedad•1h ago
The title makes it sound way worse than the 7 reported crashes listed in the article. I’d be interested to see a comparison with Waymo and other self-driving technologies in the same area (assuming they exist).
phyzome•1h ago
Converting things to rates is how you understand them in a meaningful way, particularly for things that are planned to be expanded to full scale.

(The one thing I would like to see done differently here is including an error interval.)
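For what it's worth, a crude error interval is easy to sketch. This uses a normal approximation to the Poisson count (rough at k = 7; exact intervals would be wider), with the fleet mileage implied by the article's figures:

```python
import math

k = 7               # reported crashes (from the article)
miles = k * 40_000  # implied fleet mileage at ~40k miles per crash

# ~95% interval on the count via a normal approximation to Poisson(k).
# At k this small, an exact interval (e.g. via the chi-square relation)
# would be noticeably wider, but this shows the rough scale.
lo = k - 1.96 * math.sqrt(k)
hi = k + 1.96 * math.sqrt(k)

print(f"crash count: {lo:.1f} .. {hi:.1f}")
print(f"miles per crash: {miles / hi:,.0f} .. {miles / lo:,.0f}")
```

Even this optimistic interval spans roughly 23k to 150k miles per crash, so the point estimate alone overstates the precision.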

jsight•15m ago
Yeah, I'm glad that they are trying to do a rate, the problem is that the numerator in the human case is likely far larger than what they are indicating.

Of the Tesla accidents, five of them involved either collisions with fixed objects, animals, or a non-injured cyclist. Extremely minor versions of these with human drivers often go unreported.

Unfortunately, without the details, this comparison will end up being a comparison between two rates with very different measurement approaches.

Rebelgecko•21m ago
I couldn't find Waymo's stats for all crashes in 12 seconds of googling, but they have breakdowns for "crashes with serious injury" (none yet) "crashes resulting in injury" (5x reduction) and "crashes where airbag deployed" (14x reduction), relative to humans in Austin

Austin has relatively low miles so the confidence intervals are wider but not too far from what they show for other cities

tomhow•12m ago
We updated the title to the original. All, please remember the section of the guidelines about editorialising titles.

Please use the original title, unless it is misleading or linkbait; don't editorialize.

https://news.ycombinator.com/newsguidelines.html

bparsons•1h ago
This is the sort of thing that occurs when the interests of the public become subordinate to the interests of a lawless aristocracy. Financial, social and public safety considerations are costs that can be transferred to the public to preserve the wealth of a few individuals.
themafia•1h ago
Was there a time when the interests of the public weren't subordinate?
davidw•41m ago
It's not a binary on/off thing. It's a lot, lot worse right now.
tigranbs•1h ago
IMHO, this is not too bad! But obviously, coming from the software product industry, everyone knows that building features isn't the same as operating in practice and optimizing based on the use case, which takes a ton of time.

Waymo has a huge head start, and it is evident that the "fully autonomous" robotaxi date is far behind what Elon is saying publicly. They will do it, but it is not as close as the hype suggests.

phyzome•1h ago
...why would "more than 10x less safe than a human driver" be "not too bad"?
dylan604•55m ago
It could have been 100x or 1000x or...
Rebelgecko•12m ago
"worse than a human" seems like a fail
verteu•48m ago
It's pretty bad given there's a Tesla employee behind the wheel supervising.
Veserv•1h ago
The most damning thing is that the most advanced version, with the most modern hardware, with perfectly maintained vehicles, running in a pre-trained geofence that is pre-selected to work well [1], with trained, professional safety drivers, with scrutinized data and reporting, averages an upper bound of 40,000 miles per collision (assuming the mileage numbers were not puffery [3]).

Yet somehow they claim that old versions, using old hardware, on arbitrary roads, using untrained customers as safety drivers somehow average 2.9 million miles per collision in non-highway environments [2], a ~72.5x difference in collision frequency, and 5.1 million miles per collision in all environments, a ~175x(!) difference in collision frequency, when their reporting and data are not scrutinized.

I guess their most advanced software and hardware and professional safety drivers just make it 175x more dangerous.

[1] https://techcrunch.com/2025/05/20/musk-says-teslas-self-driv...

[2] https://www.tesla.com/fsd/safety

[3] https://www.forbes.com/sites/alanohnsman/2025/08/20/elon-mus...

[3.a] Tesla own attorneys have argued that statements by Tesla executives are such nonsense that no reasonable person would believe them.

natch•1h ago
Most minor fender benders are not reported by the people involved, whereas even the most minor ones, often caused by other humans, must be assiduously reported by any company doing such a rollout.

A responsible journalist with half a clue would mention that, and tell us how that distorts the numbers. If we correct for this distortion, it’s clear that the truth would come out in Tesla’s favor here.

Instead the writer embraces the distortion, trying to make Tesla look bad, and one is left to wonder if they are intentionally pushing a biased narrative.

bryanlarsen•19m ago
Every 40,000 miles is every 2nd year for the average American. Every 500,000 miles is once in a lifetime for the average American.

Using your own personal experience, it should be obvious that trivial fender benders are more common than once per lifetime but significantly less common than one every couple of years.

wizardforhire•1h ago
How many more people have to die?

In the past it took a lot less to get the situation fixed… and these were horrendous situations! [1][2] And yet Tesla is a factor of 10 worse!

[1] https://en.wikipedia.org/wiki/Ford_Pinto

[2] https://en.wikipedia.org/wiki/Firestone_and_Ford_tire_contro...

ajross•1h ago
FTA: "For comparison, the average human driver in the US crashes about once every 500,000 miles."

Does anyone know what the cite for this might be? I'm coming up empty. To my knowledge, no one (except maybe insurance companies) tallies numbers for fender-bender-style accidents. This seems like a weirdly high number to me; it's very rare to find any vehicle that reaches 100k miles without at least a bump or two requiring repair.

My suspicion is that this is a count of accidents involving emergency vehicle or law enforcement involvement? In which case it's a pretty terrible apples/oranges comparison.

habosa•44m ago
Yeah as much as I think that Tesla is full of shit, there’s no way this is true. I don’t know a single person that’s driven 500k miles lifetime but everyone I know has been in at least one minor accident.
bryanlarsen•17m ago
The average American drives more than 600k miles in a lifetime.
senordevnyc•34m ago
Yeah, I think that might be the stat for “serious” accidents
jsight•12m ago
Somewhat amusingly, the human rate should also be filtered based upon conditions. For years people have criticized Tesla for not adjusting for conditions with their AP safety report, but this analysis makes the same class of mistake.

1/500k miles that includes the interstate will be very different from the rate for an urban environment.

furyofantares•12m ago
> This seems like a weirdly high number to me, it's very rare to find any vehicle that reaches 100k miles without at least one bump or two requiring repair.

It does seem like a high number to me - in 30 years of pretty heavy driving I've probably done about 500k miles and I've definitely had more than one incident. But not THAT many more than one, and I've put 100k miles on a few vehicles with zero incidents. Most of my incidents were when I was a newer driver who drove fairly recklessly.

pavon•1m ago
This NHTSA report [1] agrees with those numbers. It reports 6,138,359 crashes and 3,246,817,000,000 vehicle miles traveled in the US for 2023, which comes to about 530k miles per crash.

[1] https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...
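The division checks out, using the figures as quoted from the report:

```python
# Sanity-checking the quoted NHTSA figures for 2023.
crashes = 6_138_359
vehicle_miles = 3_246_817_000_000  # vehicle miles traveled

miles_per_crash = vehicle_miles / crashes
print(f"{miles_per_crash:,.0f} miles per crash")  # ~529,000, i.e. "about 530k"
```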

jmpman•1h ago
I’m still waiting until I see little X Æ A-Xii playing in the street while Tesla Robotaxis deliver passengers before I buy these arguments. Until then, my children are playing in the street while these autonomous vehicles threaten their safety. I’m upset that this is forced upon the public by the government.
dylan604•57m ago
This would imply you feel the parent of said kid cares about said kid more than about the parent's company.
awestroke•13m ago
At this point he's just an anxious wreck on ketamine fully trusting a broken gut feel in each and every situation
93po•18m ago
Electrek notoriously lies and fibs and stretches the truth to hate on Tesla and Elon as much as possible.

This one is misleading both because 8 "crashes" is too small a sample to draw statistically significant conclusions about its safety compared to humans, and because these 'crashes' are not all actually crashes but a variety of things, including hitting a wild animal of unknown size and potentially minor contact with other objects of unspecified impact strength.

They make other unsubstantiated and likely just wrong claims:

> The most critical detail that gets lost in the noise is that these crashes are happening with a human safety supervisor in the driver’s seat (for highway trips) or passenger seat, with a finger on a kill switch.

The robotaxi supervisors are overwhelmingly in the passenger seat only; I've never actually seen any video footage of them in the driver's seat, and Electrek assuredly has zero evidence of how many of the reported incidents involved someone in the driver's seat. Additionally, these supervisors in the passenger seat are not instructed to prevent every single incident (they aren't going to emergency brake for a squirrel), and to characterize them as "babysitting to prevent accidents" is just wrong.

This article is full of other glaring problems and lies and mistruths but it's genuinely not worth the effort to write 5 pages on it.

If you want some insight into why Fred Lambert might be doing this, look no further than the bottom of the page: Fred sells and shares "investment tips" which, you guessed it, are perpetually trying to convince people to sell and short Tesla: https://x.com/FredLambert/status/1831731982868369419

Feel free to look at his other posts: it's 95% trying to convince people that Tesla is going bankrupt tomorrow, and trying to slam Elon as much as possible - sometimes for good reasons (transphobia) but sometimes in ways that really harm his credibility, if he actually had any.

Lambert has also been accused of astroturfing in lawsuits, and had to go through a settlement that required him to retract all the libel he had spread: https://www.thedrive.com/tech/21838/the-truth-behind-electre...

The owner of Electrek, Seth Weintraub, also notably does the same thing: https://x.com/llsethj/status/1217198837212884993

narrator•16m ago
I can imagine why they redact the reports so much: Elon-hating NGOs would gladly pay a lawyer to spend as much time as possible suing Tesla over each crash, even completely frivolously and with no hope of recouping the time and money spent, and think they were doing the great work of social justice.
jsight•7m ago
I spent a little bit of time poking at Gemini to see what it thought the accident rate in an urban area like Austin would be, including unreported minor cases. It estimated 2-3/100k miles. This is still lower than the extrapolation in the article, but maybe not notably lower.

We need far higher quality data than this to reach meaningful conclusions. Implying conclusions based upon this extrapolation is irresponsible.