Gemini 3 Flash: Frontier intelligence built for speed

https://blog.google/products/gemini/gemini-3-flash/
560•meetpateltech•4h ago•251 comments

Why do commercial spaces sit vacant?

https://archive.strongtowns.org/journal/2025/5/21/why-do-commercial-spaces-sit-vacant
53•NaOH•58m ago•33 comments

How SQLite is tested

https://sqlite.org/testing.html
136•whatisabcdefgh•3h ago•21 comments

FIFA Arrives on Netflix Games

https://www.netflix.com/tudum/articles/fifa-mens-world-cup-2026-game-on-netflix
24•0xedb•1h ago•21 comments

Show HN: High-Performance Wavelet Matrix for Python, Implemented in Rust

https://pypi.org/project/wavelet-matrix/
23•math-hiyoko•1h ago•0 comments

Coursera to combine with Udemy

https://investor.coursera.com/news/news-details/2025/Coursera-to-Combine-with-Udemy-to-Empower-th...
326•throwaway019254•8h ago•193 comments

AWS CEO says replacing junior devs with AI is 'one of the dumbest ideas'

https://www.finalroundai.com/blog/aws-ceo-ai-cannot-replace-junior-developers
530•birdculture•4h ago•318 comments

A Safer Container Ecosystem with Docker: Free Docker Hardened Images

https://www.docker.com/blog/docker-hardened-images-for-every-developer/
203•anttiharju•4h ago•47 comments

Inside PostHog: SSRF, ClickHouse SQL Escape and Default Postgres Creds to RCE

https://mdisec.com/inside-posthog-how-ssrf-a-clickhouse-sql-escaping-0day-and-default-postgresql-...
7•arwt•41m ago•0 comments

AI capability isn't humanness

https://research.roundtable.ai/capabilities-humanness/
39•mdahardy•4h ago•36 comments

The State of AI Coding Report 2025

https://www.greptile.com/state-of-ai-coding-2025
40•dakshgupta•4h ago•44 comments

Tell HN: HN was down

377•uyzstvqs•4h ago•238 comments

Zmij: Faster floating point double-to-string conversion

https://vitaut.net/posts/2025/faster-dtoa/
48•fanf2•3d ago•1 comment

Launch HN: Kenobi (YC W22) – Personalize your website for every visitor

23•sarreph•4h ago•38 comments

Notes on Sorted Data

https://amit.prasad.me/blog/sorted-data
46•surprisetalk•6d ago•6 comments

"There are more Japanese [VRChat] creators than all other countries combined "

https://twitter.com/chyadosensei/status/2001356290531156159
19•numpad0•53m ago•3 comments

Doublespeed hacked, revealing what its AI-generated accounts are promoting

https://www.404media.co/hack-reveals-the-a16z-backed-phone-farm-flooding-tiktok-with-ai-influencers/
115•grahamlee•3h ago•56 comments

I couldn't find a logging library that worked for my library, so I made one

https://hackers.pub/@hongminhee/2025/logtape-fedify-case-study
16•todsacerdoti•5d ago•14 comments

I created a publishing system for step-by-step coding guides in Typst

https://press.knowledge.dev/p/new-150-pages-rust-guide-create-a
22•deniskolodin•4d ago•5 comments

AI Isn't Just Spying on You. It's Tricking You into Spending More

https://newrepublic.com/article/204525/artificial-intelligence-consumers-data-dynamic-pricing
9•c420•33m ago•1 comment

Announcing the Beta release of ty

https://astral.sh/blog/ty
790•gavide•1d ago•148 comments

Learning the oldest programming language (2024)

https://uncenter.dev/posts/learning-fortran/
36•lioeters•8h ago•39 comments

No AI* Here – A Response to Mozilla's Next Chapter

https://www.waterfox.com/blog/no-ai-here-response-to-mozilla/
499•MrAlex94•23h ago•275 comments

AI's real superpower: consuming, not creating

https://msanroman.io/blog/ai-consumption-paradigm
188•firefoxd•12h ago•130 comments

Is Mozilla trying hard to kill itself?

https://infosec.press/brunomiguel/is-mozilla-trying-hard-to-kill-itself
758•pabs3•11h ago•670 comments

TLA+ Modeling Tips

http://muratbuffalo.blogspot.com/2025/12/tla-modeling-tips.html
105•birdculture•13h ago•26 comments

Thin desires are eating life

https://www.joanwestenberg.com/thin-desires-are-eating-your-life/
698•mitchbob•1d ago•231 comments

I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in hours

https://simonwillison.net/2025/Dec/15/porting-justhtml/
235•pbowyer•22h ago•127 comments

FCC chair suggests agency isn't independent, word cut from mission statement

https://www.axios.com/2025/12/17/brendan-carr-fcc-independent-senate-testimony-website
117•jmsflknr•3h ago•106 comments

AI will make formal verification go mainstream

https://martin.kleppmann.com/2025/12/08/ai-formal-verification.html
785•evankhoury•1d ago•395 comments

AI capability isn't humanness

https://research.roundtable.ai/capabilities-humanness/
38•mdahardy•4h ago

Comments

somewhereoutth•2h ago
Unfortunately a lot of the hype around LLMs is that their capability is humanness, specifically that they are (much) cheaper humans for replacing your expensive and annoying current humans.
bitwize•1h ago
What I think you mean to say is that AI is promoted as fungible with humans at a lower price point.
skydhash•1h ago
I think this is the first time the C-suite has been so interested in having their employees use a tool regardless of the result. It's like prescribing that you have to send an email twice a day using Outlook, and make sure to use an attachment both times.
cloflaw•1h ago
> that they are (much) cheaper humans

This is literally their inhumanness.

yannyu•2h ago
One thing I don't understand in these conversations is why we're treating LLMs as if they are completely interchangeable with chatbots/assistants.

A car is not just an engine, it's a drivetrain, a transmission, wheels, steering, all of which affect the end-product and its usability. LLMs are no different, and focusing on alignment without even addressing all the scaffolding that intermediates the exchange between the user and the LLM in an assistant use case seems disingenuous.

ForceBru•1h ago
> Compared to humans, LLMs have effectively unbounded training data. They are trained on billions of text examples covering countless topics, styles, and domains. Their exposure is far broader and more uniform than any human's, and not filtered through lived experience or survival needs.

I think it's the other way round: humans have effectively unbounded training data. We can count exactly how much text any given model saw during training. We know exactly how many images or video frames were used to train it, and so on. Can we count the amount of input humans receive?

I can look at my coffee mug from any angle I want, I can feel it in my hands, I can sniff it, lick it and fiddle with it as much as I want. What happens if I move it away from me? Can I turn it this way, can I lift it up? What does it feel like to drink from this cup? What does it feel like when someone else drinks from my cup? The LLM has no idea because it doesn't have access to sensory data and it can't manipulate real-life objects (yet).

cortesoft•1h ago
Not only that, but humans also have access to all of the "training data" of hundreds of millions of years of evolution baked into our brains.
ACCount37•1h ago
Which must be doing some heavy lifting.

Humans ship with all the priors evolution has managed to cram into them. LLMs have to rediscover all of it from scratch just by looking at an awful lot of data.

layer8•54m ago
I don’t think the amount of data is essential here. The human genome is only around 750 MB, much less than current LLMs, and likely only a small fraction of it determines human intelligence. On the other hand, current LLMs contain immense amounts of factual knowledge that a human newborn carries zero information about.

Intelligence likely doesn’t require that much data, and it may be more a question of evolutionary chance. After all, human intelligence is largely (if not exclusively) the result of natural selection from random mutations, with a generation count that’s likely smaller than the number of training iterations of LLMs. We haven’t found a way yet to artificially develop a digital equivalent effectively, and the way we are training neural networks might actually be a dead end here.
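
A back-of-envelope check of the figures in this comment (the 70B-parameter model size is an illustrative assumption, not a number from the thread):

    # ~3.1 billion base pairs at 2 bits per base, vs. an illustrative
    # 70B-parameter model stored as 16-bit weights.
    base_pairs = 3.1e9
    genome_bytes = base_pairs * 2 / 8                     # 2 bits per base pair
    print(f"genome: ~{genome_bytes / 1e6:.0f} MB")        # ~775 MB

    llm_params = 70e9
    llm_bytes = llm_params * 2                            # 2 bytes per fp16 weight
    print(f"70B-param model: ~{llm_bytes / 1e9:.0f} GB")  # ~140 GB
    print(f"ratio: ~{llm_bytes / genome_bytes:.0f}x")     # ~181x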

ACCount37•11m ago
That just says "low Kolmogorov complexity". All the priors humans ship with can be represented as a relatively compact algorithm.

Which gives us no information on computational complexity of running that algorithm, or on what it does exactly. Only that it's small.

LLMs don't get that algorithm, so they have to discover certain things the hard way.

emp17344•1h ago
It’s unlikely sensory data contributes to intelligence in human beings. Blind people take in far, far less sensory data than sighted people, and yet are no less intelligent. Think of Helen Keller - she was deafblind from an early age, and yet was far more intelligent than the average person. If your hypothesis is correct, and development of human intelligence is primarily driven by sensory data, how do you reconcile this with our observations of people with sensory impairments?
jakeinspace•57m ago
Blind people tend to have significantly less spatial intelligence, though. Not very nice to say it like that, and of course they often develop heightened intelligence in other areas, but we do consider human-level spatial reasoning a very important goal in AI.
emp17344•49m ago
People with sensory impairments from birth may be restricted in certain areas, on account of the sensory impairment, but are no less generally cognitively capable than the average person.
erichocean•36m ago
> but are no less generally cognitively capable than the average person

I think this would depend entirely on how the sensory impairment came about, since most genetic problems are not isolated, but carry a bunch of other related problems (all of which can impact intelligence).

Lose your eyesight in an accident? I would grant there is likely no difference on average.

Otherwise, the null hypothesis is that intelligence is likely worse on average, along with a whole host of other problems.

dpark•52m ago
> It’s unlikely sensory data contributes to intelligence in human beings.

This is clearly untrue. All information a human ever receives is through sensory data. Unless your position is that the intelligence of a brain that was grown in a vat with no inputs would be equivalent to that of a normal person.

Now, does rotating a coffee mug and feeling its weight, seeing it from different angles, etc. improve intelligence? Actually, still yes, if your intelligence test happens to include questions like “is this a picture of a mug” or “which of these objects is closest in weight to a mug”.

emp17344•37m ago
>Unless your position is that the intelligence of a brain that was grown in a vat with no inputs would be equivalent to that of a normal person.

Entirely possible - we just don’t know. The closest thing we have to a real world case study is Helen Keller and other people with significant sensory impairments, who are demonstrably unimpaired in a general cognitive sense, and in many cases more cognitively capable than the average unimpaired person.

moffkalast•57m ago
There's only so much information content you can get from a mug though.

We get a lot of high quality data that's relatively the same. We run the same routines every day, doing more or less the same things, which makes us extremely reliable at what we do but not very worldly.

LLMs get the opposite: sparse, relatively low quality, low modality data that's extremely varied, so they have a much wider breadth of knowledge but they're pretty fragile in comparison since they get relatively little experience on each topic and usually no chance to affirm learning with RL.

mdahardy•46m ago
This is a fair criticism we should've addressed. There's actually a nice study on this: Vong et al. (https://www.science.org/doi/10.1126/science.adi1374) hooked up a camera to a baby's head so it would get all the input data a baby gets. A model trained on this data learned some things babies do (eg word-object mappings), but not everything. However, this model couldn't actively manipulate the world in the way that a baby does and I think this is a big reason why humans can learn so quickly and efficiently.

That said, LLMs are still trained on significantly more data pretty much no matter how you look at it. E.g. a blind child might hear 10-15 million words by age 6 vs. trillions for LLMs.
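
The rough ratio implied by those figures (taking "trillions" as ~1.5e13 tokens; both numbers are the comment's estimates, not measurements):

    child_words = 15e6     # upper end of 10-15 million words heard by ~age 6
    llm_tokens = 1.5e13    # order of magnitude of a modern pretraining corpus
    print(f"ratio: ~{llm_tokens / child_words:,.0f}x")   # ~1,000,000x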

JohnFen•32m ago
> hooked up a camera to a baby's head so it would get all the input data a baby gets.

A camera hooked up to the baby's head is absolutely not getting all the input data the baby gets. It's not even getting most of it.

omneity•9m ago
While an LLM is trained on trillions of tokens to acquire its capabilities, it does not actively retain or recall the vast majority of them, and often enough is not able to perform deductive reasoning either (e.g. X owns Y does not necessarily translate to Y belongs to X).

The acquired knowledge is a lot less uniform than you’re proposing and in fact is full of gaps a human would never make. And more critically, it is not able to peer into all of its vast knowledge at once, so with every prompt what you get is closer to an “instance of a human” than “all of humanity” as you might think of LLMs.

(I train and dissect LLMs for a living and for fun)

lumost•35m ago
A big challenge is that the LLM cannot selectively sample its training set. You don't forget what a coffee cup looks like just because you only drank water for a week. LLMs on the other hand will catastrophically forget anything in their training set when the training set does not have a uniform distribution of samples in each batch.
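
A minimal sketch of the effect described above (assumes PyTorch; the toy regression tasks are made up for illustration): a small network fine-tuned only on "new" data drifts badly on the "old" data it no longer sees, while mixing both regions into every update cycle preserves them.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_task(lo, hi, n=256):
        # A 1-D regression task restricted to one region of the input space.
        x = torch.linspace(lo, hi, n).unsqueeze(1)
        return x, torch.sin(3 * x)

    task_a = make_task(-3.0, -0.2)   # "old" data
    task_b = make_task(0.2, 3.0)     # "new" data

    def new_model():
        return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

    def train(model, tasks, steps=2000):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            for x, y in tasks:               # one full-batch update per task
                opt.zero_grad()
                nn.functional.mse_loss(model(x), y).backward()
                opt.step()

    def loss_on(model, task):
        x, y = task
        with torch.no_grad():
            return nn.functional.mse_loss(model(x), y).item()

    # Sequential: train on task A, then fine-tune only on task B.
    seq = new_model()
    train(seq, [task_a])
    train(seq, [task_b])

    # Interleaved: both tasks appear in every update cycle.
    mix = new_model()
    train(mix, [task_a, task_b])

    print(f"sequential  task A loss: {loss_on(seq, task_a):.4f}  (degraded)")
    print(f"interleaved task A loss: {loss_on(mix, task_a):.4f}  (retained)")
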
zkmon•1h ago
I think there might be a slight bias in this blog article in favor of their product/service. Their human verification service probably needs AI to have less humanness.

But as we've seen over the course of recent months and years, AI outputs are becoming increasingly indistinguishable from human output.

mdahardy•43m ago
Our main argument is that outputs will become increasingly indistinguishable, but the processes won't. E.g. in 5 years if you watch an AI book a flight it will do it in a very non-human way, even if it gets the same flight you yourself would book.
layer8•36m ago
If the observable behavior (output) becomes indistinguishable (which I’m doubtful of), what does it matter that the internal process is different? Surely only to the extent that the behavior still exhibits differences after all?
erichocean•33m ago
> in 5 years if you watch an AI book a flight it will do it in a very non-human way

I would bet completely against this: models are becoming more human-like, not less, over time.

What's more likely to change (that would cause a difference) is the work itself changing to adapt to areas where models are already super-human, such as being able to read entire novels in seconds with full attention.

gmuslera•1h ago
LLMs are language models. We interact with them using language, all of that, but also only that. That doesn't mean that they have "common sense", context, same motivations, agency, or even reasoning like us.

But as we interact with other people mostly using language, and since the start of the internet a lot of those interactions have happened in a way similar to how we interact with AI, the difference is not so obvious. We are falling into the Turing test here, mostly because that test is more about language than about intelligence.

ACCount37•1h ago
"Language" is just the interface. What happens on the inside of LLMs is a lot weirder than that.
lawlessone•56m ago
I feel like the interface in this case has caused us to fool ourselves into thinking there's more there than there is.

Before 2022 (most of history), if you had a long seemingly sensible conversation with something, you could safely assume this other party was a real thinking human mind.

it's like a duck call.

Edit: I want to add that because this is a neural net trained to output sensible text, language isn't just the interface.

There's no separation between anything.

freejazz•51m ago
And?
gmuslera•32m ago
What matters is what happens on the outside. We don't know what happens on our inside (or the inside of others, at least); we know the language and how it is used, and even the meanings don't have to be the same as long as they are consistent. And you get that by construction. Does that mean intelligence, self-consciousness, soul or whatever? We only know that it walks like a duck and quacks like a duck.
danaris•29m ago
"Weirder" does not mean "more complex" or "more human-like".
measurablefunc•26m ago
Which arithmetic operation in an LLM is weird?
ACCount37•7m ago
The fact that you can represent abstract thinking as a big old bag of matrix math sure is.
measurablefunc•6m ago
So it's not weird, it's actually very mundane.
acituan•40m ago
Language is not humanness either; it is a disembodied artifact of our extended cognition, a way of transferring the contents of our consciousness to others or to ourselves over time. This is precisely what LLMs piggyback on and therefore are exceedingly good at simulating, which is why the accuracy of "is this human" tools is stuck at 60-70% (50% is a coin flip) and is going to stay bounded for the foreseeable future.

And I am sorry to be negative but there is so much bad cognitive science in this article that I couldn't take the product seriously.

> LLMs can be scaled almost arbitrarily in ways biological brains cannot: more parameters, more training compute, more depth.

- Capacity of raw compute is irrelevant without mentioning the complexity of the computation task at hand. LLMs can scale (not infinitely), but they solve O(n^2) tasks. It is also amiss to think human compute = a single human's head. Language itself is both a tool and a protocol of distributed compute among humans. You borrow a lot of your symbolic preprocessing from culture! As said, this is exactly what LLMs piggyback on.

> We are constantly hit with a large, continuous stream of sensory input, but we cannot process or store more than a very small part of it.

- This is called relevance, and we are so frigging good at it! The fact that the machine has to deal with a lot more unprioritized data in a relatively flat O(n^2) problem formulation is a shortcoming, not a feature. The visual cortex is such an opinionated accelerator of processing all that massive data that only the relevant bits need to make it to your consciousness. And this architecture was trained for hundreds of millions of years, over trillions of experiment arms that were in parallel experimenting on everything else too.

> Humans often have to act quickly. Deliberation is slow, so many decisions rely on fast, heuristic processing. In many situations (danger, social interaction, physical movement), waiting for more evidence simply isn't an option.

- Again, a lot of this equates conscious processing with the entirety of cognition. Anyone who plays sports or music knows to respect the implicit, embodied cognition that goes into achieving complex motor tasks. We have yet to see a non-massively-fast-forwarded household robot do a mundane kitchen cleaning task and then go play table tennis with the same motor "cortex". Motor planning and articulation is a fantastically complex computation; just because it doesn't make it to our consciousness or isn't instrumented exclusively through language doesn't mean it is not.

> Human thinking works in a slow, step-by-step way. We pay attention to only a few things at a time, and our memory is limited.

- Thinking, Fast and Slow by Kahneman is a fantastic way of getting into how much more complex the mechanism is.

The key point here is how good humans are at relevance, as limited as their recall is, because relevance matters, because it is existential. Therefore, when you are using a tool to extend your recall, it is important to see its limitations. Google Search having indexed billions of pages is not a feature if it can't surface the top results well. If it gains the capability to sell me on whatever it brought up being relevant, that still doesn't mean the results actually are. And this is exactly the degradation of relevance we are seeing in our culture.

I don't care if the language terminal is a human or a machine; if the human was convinced by the low-relevance crap of the machine, it is just a legitimacy laundering scheme. Therefore this is not a tech problem, it is a problem of culture: we need to be simultaneously cultivating epistemic humility, including quitting the Cartesian tyranny of worshipping explicit verbal cognition that is assumed to be locked up in a brain, and accepting that we are also embodied and social beings that depend on a lot of distributed compute to solve for agency.