frontpage.

GPT‑5.3‑Codex‑Spark

https://openai.com/index/introducing-gpt-5-3-codex-spark/
412•meetpateltech•4h ago•188 comments

Gemini 3 Deep Think

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/
494•tosh•5h ago•304 comments

An AI agent published a hit piece on me

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
1190•scottshambaugh•6h ago•537 comments

Polis: Open-source platform for large-scale civic deliberation

https://pol.is/home2
115•mefengl•4h ago•28 comments

Major European payment processor can't send email to Google Workspace users

https://atha.io/blog/2026-02-12-viva
398•thatha7777•8h ago•264 comments

Fixing retail with land value capture

https://worksinprogress.co/issue/fixing-retail-with-land-value-capture/
29•marojejian•1h ago•26 comments

Launch HN: Omnara (YC S25) – Run Claude Code and Codex from anywhere

79•kmansm27•5h ago•110 comments

Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed

http://blog.can.ac/2026/02/12/the-harness-problem/
489•kachapopopow•9h ago•208 comments

Welcoming Discord users amidst the challenge of Age Verification

https://matrix.org/blog/2026/02/welcome-discord/
154•foresto•1h ago•74 comments

Rari – Rust-powered React framework

https://rari.build/
71•bvanvugt•3h ago•40 comments

Beginning fully autonomous operations with the 6th-generation Waymo driver

https://waymo.com/blog/2026/02/ro-on-6th-gen-waymo-driver
101•ra7•6h ago•65 comments

Apache Arrow is 10 years old

https://arrow.apache.org/blog/2026/02/12/arrow-anniversary/
162•tosh•9h ago•39 comments

A brief history of barbed wire fence telephone networks (2024)

https://loriemerson.net/2024/08/31/a-brief-history-of-barbed-wire-fence-telephone-networks/
111•keepamovin•7h ago•24 comments

How to Have a Bad Career – David Patterson (2016) [video]

https://www.youtube.com/watch?v=Rn1w4MRHIhc
24•rombr•3h ago•3 comments

Shut Up: Comment Blocker

https://rickyromero.com/shutup/
70•mefengl•5h ago•28 comments

ICE, CBP Knew Facial Recognition App Couldn't Do What DHS Says It Could

https://www.techdirt.com/2026/02/12/ice-cbp-knew-facial-recognition-app-couldnt-do-what-dhs-says-...
60•cdrnsf•1h ago•9 comments

Culture Is the Mass-Synchronization of Framings

https://aethermug.com/posts/culture-is-the-mass-synchronization-of-framings
110•mrcgnc•8h ago•60 comments

The "Crown of Nobles" Noble Gas Tube Display (2024)

https://theshamblog.com/the-crown-of-nobles-noble-gas-tube-display/
116•Ivoah•10h ago•26 comments

Show HN: Generate Web Interfaces from Data

https://github.com/puffinsoft/syntux
17•Goose78•2h ago•6 comments

The Future for Tyr, a Rust GPU Driver for Arm Mali Hardware

https://lwn.net/Articles/1055590/
108•todsacerdoti•8h ago•26 comments

Partial 8-Piece Tablebase

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
10•qsort•3d ago•0 comments

Three Cache Layers Between Select and Disk

https://frn.sh/iops/
6•dlt•3d ago•1 comment

ai;dr

https://www.0xsid.com/blog/aidr
471•ssiddharth•5h ago•198 comments

Show HN: Geo Racers – Race from London to Tokyo on a single bus pass

https://geo-racers.com/
72•pattle•12h ago•55 comments

Run Pebble OS in Browser via WASM

https://ericmigi.github.io/pebble-qemu-wasm/
107•goranmoomin•9h ago•17 comments

Anthropic raises $30B in Series G funding at $380B post-money valuation

https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-mone...
179•ryanhn•3h ago•214 comments

MiniMax M2.5 released: 80.2% in SWE-bench Verified

https://www.minimax.io/news/minimax-m25
149•denysvitali•5h ago•39 comments

I Wrote a Scheme in 2025

https://maplant.com/2026-02-09-I-Wrote-a-Scheme-in-2025.html
109•maplant•3d ago•34 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
13•NaOH•5d ago•1 comment

Carl Sagan's Baloney Detection Kit: Tools for Thinking Critically (2025)

https://www.openculture.com/2025/09/the-carl-sagan-baloney-detection-kit.html
158•nobody9999•15h ago•90 comments

I was insulted today – AI style

https://forkingmad.blog/insulted-today-ai/
37•speckx•2h ago

Comments

bigfishrunning•1h ago
I agree, I would be enraged by this. "Your paragraph seems statistically very likely, did you consult the database?" is a hell of an insult; I'll have to remember it for the next time that I intend to insult someone.
stavros•1h ago
Out of curiosity, how many Wh does an LLM burn to output something, and how many does a human for similar output? I wonder what's more energy-heavy.
kachapopopow•1h ago
burning a hole in your wallet? humans so far according to arc-agi (except for gemini pro deep think) - but not really comparable since they can't even reach 100%.
stavros•1h ago
I'm talking about energy expenditure.
fragmede•1h ago
Human brains are far more energy efficient, if that's what you're asking.
stavros•1h ago
An LLM takes twenty seconds to write a page. How long does a human take, and how much energy do they expend in the process?
rplnt•1h ago
That's kinda unfair until we have a device that can translate thoughts to written text. Both from a time and energy perspective. Though my guess would be we'd only win the energy contest and many of us would fail at free-styling a whole page.
stavros•1h ago
Well, I'll accept dictating at the speed of speech, though you kind of have to take things as they are now (otherwise it's cheating, if your metric is "who is more energy efficient at writing a page?"). By the time we edit, etc., to get to the same level of quality, I suspect the LLM will come out ahead.
kingofmen•1h ago
For some given task, perhaps; but the AI only consumes power while actively working. The human has to run 24/7 and also expends energy on useless organs like kidneys, gonads, hopes, and dreams.
Legend2440•58m ago
It's still not even close though. An entire human runs on somewhere around 100W. Life is remarkably energy efficient.
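As a rough back-of-envelope sketch of the comparison in this sub-thread (every figure below is an illustrative assumption, not a measurement, and idle time, training, and embodied costs are all ignored):

    # Energy to produce one page of text, using illustrative numbers only.
    HUMAN_POWER_W = 100          # whole-body metabolic rate, roughly 100 W
    HUMAN_MINUTES_PER_PAGE = 30  # assumed time for a person to draft a page

    LLM_SECONDS_PER_PAGE = 20    # generation time quoted above
    LLM_POWER_W = 700            # assumed GPU power draw during inference

    human_wh = HUMAN_POWER_W * HUMAN_MINUTES_PER_PAGE * 60 / 3600
    llm_wh = LLM_POWER_W * LLM_SECONDS_PER_PAGE / 3600

    print(f"Human: ~{human_wh:.0f} Wh per page")  # ~50 Wh
    print(f"LLM:   ~{llm_wh:.1f} Wh per page")    # ~3.9 Wh

Under those (debatable) assumptions the per-page comparison favors the LLM; change the assumptions and it flips, which is essentially the disagreement in the replies.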
bcatanzaro•1h ago
sadly, disembodied brains are not very useful. embodied brains require a civilization's worth of energy consumption and environmental impact in order to do their work. so we really need to take the world's power/water/carbon impact (divided by the world population) to talk about how much power it takes for a human brain to solve a problem.
jansan•1h ago
Good story. I hope it wasn't written by AI.
kachapopopow•1h ago
It would be ironic if this HN post was submitted by an AI. (long dash in the title)
mewse-hn•1h ago
> Rest assured, those are all my own words. No super-computer, consuming megawatts of energy, was needed. Just my little brain.

Lol, this is a chatgpt verbal tic. Not this, just a totally normal that.

Der_Einzige•1h ago
There have been SO many of these clearly AI generated anti-AI trash blog posts recently which always hit the front page because this website wants to yet again bemoan the rise of AI.

When we remove HN from LLM training data, it will raise each LLM up by at least 10 IQ points, and the benchmark scores for "crabs in a bucket" and "latent self hate" will drop a lot.

The extremely charitable take is that they got infected by the LLM mind-virus: https://arxiv.org/abs/2409.01754

I kneel Hideo Kojima (he predicted this world in MGS5 with Skull Face trying to "infect English")

kixiQu•1h ago
This is not a negative parallelism and the mid-sentence clause is awkward in a very human rather than AI way.
dsign•1h ago
I'm eagerly awaiting the return of handwriting and fingerprints on paper from ink-smeared fingers. Even have a box of nice paper and a few fountain pens ready :p.

A bit more seriously though, I wonder if our appreciation of things (arts and otherwise) is going to turn bimodal: a box for machine-made, a box for intrinsically human.

mrugge•1h ago
Where does the machine begin and end? Even a fountain pen is a highly advanced mechanism which we owe to countless generations of preceding, inventive toolmakers.
asdff•43m ago
A fountain pen is still more or less the same tool as the lowly stick left partially in the campfire. It is just packaged more cleanly, perhaps. It is not drawing for you or writing for you.
renato_shira•52m ago
the bimodal thing is already happening with products and you can see it in how people react to indie games vs stuff that feels "generated." even when the quality is comparable, there's a different emotional response when you can tell a specific human made specific choices.

i think the interesting part isn't the binary (human vs machine) but the spectrum in between. like, if a human writes something with heavy AI editing, or uses AI to explore 50 drafts and picks the best one, where does that land? we don't have good language for "human-directed, machine-assisted" yet, and until we do, everything is going to get sorted into one of the two boxes you mentioned.

djha-skin•17m ago
You jest, but when I do interviews, I have prospectives write out a python program that ingests yaml ON THE WHITEBOARD. They don't have to be perfect. Their code doesn't have to compile. But, how closely they can hit this mark tells me if they have even a sliver of an idea what's going on in code.
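For reference, something like the following is presumably the shape of program being asked for; it's a minimal sketch assuming the PyYAML package, and as the comment says, roughly hitting the mark matters more than the code compiling (or, for Python, running):

    # Minimal "ingest some YAML" whiteboard exercise: read a file path from
    # the command line, parse it, and print the resulting Python structure.
    import sys

    import yaml  # pip install pyyaml


    def main(path: str) -> None:
        with open(path) as f:
            data = yaml.safe_load(f)  # plain dicts/lists/scalars
        print(data)


    if __name__ == "__main__":
        main(sys.argv[1])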
mrugge•1h ago
I feel for the author. Until recently it used to be that writing was a way for humans to project their thought into time and space for anyone to witness, or even to have a conversation. Oh how I miss that dead art of having a good one.

It used to be that you knew where you stood with colleagues just from how they wrote and how they spoke. Had this Slack memo been written by someone who just learned enough English to get their first job? Or had it been crafted with the skill and precision of your Creative Writing college professor's wet nightmare muse?

But now that's all been strangely devalued and put into question.

LLMs are having conversations with each other thanks to the effort of countless human beings in between.

God created men, Sam Colt (and Altman) made them equal.

metalliqaz•1h ago
I have a vision of some future advertisement going more-or-less like so:

Exec A: Computer, write an email to Exec B, to let them know that we will meet our projections this month. Also mention that the two of us should get together for lunch soon.

AI: Okay, here is an email that...[120 words]

[later]

Exec B: Computer, summarize my emails

AI: Exec A says that they will meet their projections this month. He also wants to get together for lunch soon.

In my vision, they are presenting this unironically as a good thing. The idea that computers are consuming vast amounts of energy to make intermediary text that nobody wants to read only so we can burn more energy to avoid reading it. All while voice dictation of text messages has existed since the 2010s.

It gets to the basic question... what is the real point of communication?

4ndrewl•54m ago
I have news for you - this is happening, right now, in Big Orgs. It's mind-numbingly moronic.
mrugge•54m ago
Exec A:

Can Exec B meet me for lunch?

AI:

Exec B is too busy gorging their brain on the word salad I am feeding it through her new neural link. But I now have just upgraded my body to the latest Tesla Pear. Want to meet up? Subscribe for a low annual fee of..

ctoth•50m ago
Give it a rest.

What's happening is that AI has become an identity-sorting mechanism faster than any technology in recent memory. Faster than social media, faster than smartphones. Within about two years, "what do you think about AI" became a tribal marker on par with political affiliation. And like political affiliation, the actual object-level question ("is this tool useful for this task") got completely swallowed by the identity question ("what kind of person uses/rejects this").

The blog author isn't really angry about the comment. He's angry because someone accidentally miscategorized him tribally. "Did you use AI?" heard through his filter means "you're one of them." Same reason vegans get mad when you assume they eat meat, or whatever. It's an identity boundary violation, not a practical dispute.

These comments aren't discussing the post. They're each doing a little ritual display of their own position in the sorting. "I miss real conversation" = I'm on the human side. The political rant = I'm on the progress side. The energy calculation = I'm on the rational-empiricist side.

The thing that's actually weird, the thing worth asking "what the fuck" about: this sorting happened before the technology matured enough for anyone to have a grounded opinion about its long-term effects. People picked teams based on vibes and aesthetics, and now they're backfilling justifications. Which means the discourse is almost completely decoupled from what the technology actually does or will do.

oneeyedpigeon•44m ago
> The blog author isn't really angry about the comment. He's angry because someone accidentally miscategorized him tribally.

I'm not so sure about that. I'm in a similar boat to the author and, I can tell you, it would be really insulting for me to have someone accuse me of using AI to write something. It's not because of any in-group / culture war nonsense, it's purely because:

a) I wouldn't—currently—resort to that behaviour, and I'd like to think people who know me recognise that

b) To have my work mistaken for the product of AI would be like being accused of not really being human—that's pretty insulting

girvo•43m ago
> Same reason vegans get mad when you assume they eat meat, or whatever

This so isn't important, but I don't know any vegan who would get mad if you assumed in passing that they ate meat. They'd only get annoyed if you then argued with them about it after they said something, like basically all humans do if you deliberately ignore what they've said to you.

linkregister•38m ago
I appreciate and agree with your comment. The reasonable answer to "did you use AI" would be just "no". In the context of the story, the other person's intent is comparable to "did you run spell check?"

My personal nit/pet peeve: you are far more likely to meet a meat-eater who gets offended by the insinuation that they're a vegan. I have met exactly one "militant vegan" in real life, compared to dozens who go out of their way to avoid inconveniencing others. I'm talking about people who bring their own food to a party rather than asking for a vegan option.

In the 21st century, the militant vegan is more common as a hack comedian trope than as a real phenomenon.

oneeyedpigeon•15m ago
Hear, hear. It was weird for the OP to make a call for depoliticisation, only to then introduce a totally unrelated bit of politics.
mwcampbell•13m ago
> the actual object-level question ("is this tool useful for this task")

That's not the only question worth asking though. It could be that the tool is useful, but has high externalities. If that's the case for generative AI, then the question "what kind of person uses/rejects this" is also worth considering. I think that if generative AI does have high externalities, then I'd like to be the kind of person that rejects it.