
Article by article, how Big Tech shaped the EU's roll-back of digital rights

https://corporateeurope.org/en/2026/01/article-article-how-big-tech-shaped-eus-roll-back-digital-...
119•robtherobber•57m ago•13 comments

Radboud University selects Fairphone as standard smartphone for employees

https://www.ru.nl/en/staff/news/radboud-university-selects-fairphone-as-standard-smartphone-for-e...
284•ardentsword•5h ago•134 comments

Vm0

https://github.com/vm0-ai/vm0
41•handfuloflight•4d ago•7 comments

Nuclear elements detected in West Philippine Sea

https://www.philstar.com/headlines/2026/01/18/2501750/nuclear-elements-detected-west-philippine-sea
39•ksec•3h ago•15 comments

Amazon is ending all inventory commingling as of March 31, 2026

https://twitter.com/ghhughes/status/2012824754319753456
128•MrBuddyCasino•1h ago•64 comments

A decentralized peer-to-peer messaging application that operates over Bluetooth

https://bitchat.free/
300•no_creativity_•6h ago•174 comments

Two Concepts of Intelligence

https://cacm.acm.org/blogcacm/two-concepts-of-intelligence/
32•1970-01-01•5d ago•28 comments

Ask HN: COBOL devs, how is AI coding affecting your work?

25•zkid18•45m ago•6 comments

Gaussian Splatting – A$AP Rocky "Helicopter" music video

https://radiancefields.com/a-ap-rocky-releases-helicopter-music-video-featuring-gaussian-splatting
685•ChrisArchitect•20h ago•218 comments

Nepal's Mountainside Teahouses Elevate the Experience for Trekkers

https://www.smithsonianmag.com/travel/nepal-mountainside-teahouses-elevate-experience-trekkers-he...
46•bookofjoe•4d ago•13 comments

Wikipedia: WikiProject AI Cleanup

https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup
137•thinkingemote•3h ago•50 comments

Fire Shuts GTA 6 Developer Rockstar North, Following Report of Explosion

https://www.ign.com/articles/fire-shuts-gta-6-developer-rockstar-north-following-report-of-explosion
19•finnlab•38m ago•8 comments

Dead Internet Theory

https://kudmitry.com/articles/dead-internet-theory/
419•skwee357•17h ago•498 comments

Show HN: I quit coding years ago. AI brought me back

https://calquio.com/finance/compound-interest
192•ivcatcher•13h ago•251 comments

Flux 2 Klein pure C inference

https://github.com/antirez/flux2.c
367•antirez•19h ago•126 comments

Provide agents with automated feedback

https://banay.me/dont-waste-your-backpressure/
148•ghuntley•2d ago•73 comments

A Social Filesystem

https://overreacted.io/a-social-filesystem/
455•icy•1d ago•200 comments

Fluid Gears Rotate Without Teeth

https://phys.org/news/2026-01-fluid-gears-rotate-teeth-mechanical.html
19•vlachen•4d ago•34 comments

Gladys West's vital contributions to GPS technology

https://en.wikipedia.org/wiki/Gladys_West
37•hackernj•2d ago•3 comments

AVX-512: First Impressions on Performance and Programmability

https://shihab-shahriar.github.io//blog/2026/AVX-512-First-Impressions-on-Performance-and-Program...
86•shihab•5d ago•34 comments

Fil-Qt: A Qt Base build with Fil-C experience

https://git.qt.io/cradam/fil-qt
126•pjmlp•3d ago•82 comments

The Code-Only Agent

https://rijnard.com/blog/the-code-only-agent
100•emersonmacro•11h ago•43 comments

RISC-V is coming along quite speedily: Milk-V Titan Mini-ITX 8-core board

https://www.tomshardware.com/pc-components/cpus/milk-v-titan-mini-ix-board-with-ur-dp1000-process...
38•fork-bomber•3h ago•9 comments

Gas Town Decoded

https://www.alilleybrinker.com/mini/gas-town-decoded/
163•alilleybrinker•4d ago•153 comments

Greenpeace pilot brings heat pumps and solar to Ukrainian community

https://www.pveurope.eu/power2heat/greenpeace-pilot-brings-heat-pumps-and-solar-ukrainian-community
41•doener•4h ago•30 comments

Self Sanitizing Door Handle

https://www.jamesdysonaward.org/en-US/2019/project/self-sanitizing-door-handle/
30•rendaw•3d ago•33 comments

Simulating the Ladybug Clock Puzzle

https://austinhenley.com/blog/ladybugclock.html
38•azhenley•1d ago•8 comments

Astrophotography visibility plotting and planning tool

https://airmass.org/
42•NKosmatos•3d ago•5 comments

Using proxies to hide secrets from Claude Code

https://www.joinformal.com/blog/using-proxies-to-hide-secrets-from-claude-code/
105•drewgregory•5d ago•35 comments

High-speed train collision in Spain kills at least 39

https://www.bbc.com/news/articles/cedw6ylpynyo
195•akyuu•13h ago•172 comments

Two Concepts of Intelligence

https://cacm.acm.org/blogcacm/two-concepts-of-intelligence/
32•1970-01-01•5d ago

Comments

barishnamazov•1h ago
The turkey is fed by the farmer every morning at 9 AM.

Day 1: Fed. (Inductive confidence rises)

Day 100: Fed. (Inductive confidence is near 100%)

Day 250: The farmer comes at 9 AM... and cuts its throat. Happy Thanksgiving.

The turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the farmer's purpose.

This is why Meyer's "American/Inductive" view is dangerous for critical software. An LLM coding agent is the inductive turkey. It writes perfect code for 1000 days because the tasks match the training data. On day 1001, you ask for something slightly out of distribution, and it confidently deletes your production database because it added a piece of code that wipes your tables.

Humans are inductive machines, for the most part, too. The difference is that, fortunately, fine-tuning them is extremely easy.

usgroup•1h ago
This issue arises at the edge of every induction. These two rules fit the data seen so far equally well, yet disagree about the next observation:

data so far: T T T T T T

rule1: for all i: T

rule2: for i < 7: T, else F

p-e-w•59m ago
That’s where Bayesian reasoning comes into play: prior assumptions (e.g., that engineered reality is strongly biased toward simple patterns) make one of these hypotheses much more likely than the other.

usgroup•31m ago
yes, if you decide one of them is much more likely without reference to the data, then it will be much more likely :)
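
A minimal sketch of this exchange in Python; the 0.9/0.1 simplicity prior is an illustrative assumption, not something from the thread. Both rules fit the six observed T's perfectly, so the likelihoods tie and the prior alone decides:

    # usgroup's two rules: both fit six observed T's, but disagree at i = 7.
    rule1 = lambda i: "T"
    rule2 = lambda i: "T" if i < 7 else "F"

    data = ["T"] * 6  # the evidence seen so far

    def likelihood(rule, data):
        # P(data | rule): 1 if the rule predicts every observation, else 0.
        return 1.0 if all(rule(i) == obs for i, obs in enumerate(data, 1)) else 0.0

    # Illustrative simplicity prior favoring the shorter rule (p-e-w's point).
    prior = {"rule1": 0.9, "rule2": 0.1}

    p1 = prior["rule1"] * likelihood(rule1, data)
    p2 = prior["rule2"] * likelihood(rule2, data)

    print(p1 / (p1 + p2))  # 0.9 -> rule1 favored, purely because of the prior
    print(p2 / (p1 + p2))  # 0.1 -> which is exactly usgroup's objection
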
p-e-w•1h ago
> The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.

But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.

Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.

[1] https://arxiv.org/abs/2301.02679

encyclopedism•8m ago
LLMs have surpassed being Turing machines? Turing machines now think?

LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what human intelligence actually is; that is, after all, what neuroscience is still figuring out.

barishnamazov•6m ago
I'm a believer that LLMs will keep getting better. But even today (which might or might not be "sufficient" training) they can easily run `rm -rf ~`.

Not that humans can't make these mistakes (in fact, I have nuked my home directory myself before), but I don't think it's a specific problem some guardrails can solve currently. I'm looking for innovations (either model-wise or engineering-wise) that'd do better than letting an agent run code until a goal is seemingly achieved.
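
For concreteness, a minimal sketch of the kind of guardrail in question: an illustrative deny-list check on commands an agent proposes, not any real product's implementation.

    import re

    # Illustrative deny-list of destructive shell patterns.
    DENY_PATTERNS = [
        r"\brm\s+-[a-z]*r[a-z]*f\b",  # rm -rf and close variants
        r"\bdrop\s+table\b",
        r"\btruncate\b",
    ]

    def allowed(command: str) -> bool:
        # Reject a proposed command if it matches any deny pattern.
        return not any(re.search(p, command, re.IGNORECASE) for p in DENY_PATTERNS)

    print(allowed("ls -la"))         # True
    print(allowed("rm -rf ~"))       # False
    print(allowed("rm -r -f /tmp"))  # True: slips past the naive pattern,
                                     # which is why deny-lists alone are brittle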

glemion43•48m ago
You clearly underestimate the quality of people I have seen and worked with. And yes guard rails can be added easily.

Security is my only concern, and for that we have a team doing only this; but that's also just a question of time.

Whatever LLMs can do today doesn't matter. What matters is how fast they progress, and we will see whether in 5 years we are still using LLMs, or AGI, or some kind of world models.

bdbdbdb•28m ago
> You clearly underestimate the quality of people I have seen and worked with

"Humans aren't perfect"

This argument always comes up. The existence of stupid, careless, or illiterate people in the workplace doesn't excuse spending trillions on computer systems that use more energy than entire countries and are still unreliable.

naveen99•44m ago
LLMs seem to know about farmers and turkeys, though.

macleginn•1h ago
Marx is fair game, but one of the most prominent cases of "understanding everything in advance" is undoubtedly Chomsky's theory of innate/universal grammar, which became completely dominant on (guess which) side of the pond.

ghgr•1h ago
I agree with Dijkstra on this one: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
tucnak•57m ago
I really wish all these LessWrong, "what is the meaning of intelligence" types cared enough to study Wittgenstein a bit rather than hear themselves talk; it would save us all a lot of time.

encyclopedism•6m ago
I fully agree with your sentiments. People really need to study a little!
svilen_dobrev•1h ago
Intelligence/understanding is when one can postulate/predict/calculate/presume something correctly, from concepts about it, without that thing (or anything similar) ever having been in the training/past (or even being known at all).

Yeah, not all humans do it. It's too energy-expensive; biological efficiency wins.

As for ML... maybe next time, when someone figures out how to combine deductive with inductive, in a zillion small steps, with falsification built in (instead of pitting 100% of one against 100% of the other).

notarobot123•1h ago
Memory foam doesn't really "remember" the shape of my rear end but we all understand the language games at play when we use that term.

The problem with the AI discourse is that the language games are all mixed up and confused. We're not just talking about capability, we're talking about significance too.

sebastianmestre•1h ago
This is kind of a bait-and-switch, no?

The author defines American style intelligence as "the ability to adapt to new situations, and learn from experience".

Then he argues that the current type of machine-learning-driven AI is American-style intelligent because it is inductive, which is not what was supposedly (?) being argued for.

Of course, current AI/ML models cannot adapt to new situations and learn from experience outside the scope of their context window without a retraining or fine-tuning step.

anonymous908213•43m ago
Two concepts of intelligence, and neither has remotely anything to do with real intelligence; academics sure like to play with words. I suppose this is how they justify their own existence: in the absence of being intelligent enough to contribute anything of value, they engage in wordplay that obfuscates the meaning of words until nobody understands what the hell they're talking about, and the reader mistakes that lack of understanding for the academics being more intelligent than the reader.

Intelligence, in the real world, is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call. How is it that we can go about ignoring reality for so long?

bdbdbdb•35m ago
I keep coming back to this. The most recent version of ChatGPT I tried was able to tell me how many letter 'r's were in a very long string of characters only by writing and executing a Python script to do so. Some people say this is impressive, but any 5-year-old could count the letters without knowing any Python.
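
The script in question amounts to a one-liner; something like the following sketch, with a made-up stand-in for the actual string:

    # A made-up stand-in for the "very long string" from the comment.
    text = "strawberry " * 1000

    # The kind of trivial script the model reportedly had to write and run:
    print(text.count("r"))  # 3000
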
williamcotton•12m ago
How is counting not a technology?

The calculations are internal but they happen due to the orchestration of specific parts of the brain. That is to ask, why can't we consider our brains to be using their own internal tools?

I certainly don't think about multiplying two-digit numbers in my head in the same manner as when playing a Dm to a G7 chord that begs to resolve to a C!

satisfice•34m ago
Intelligence is not just about reasoning with logic. Computers are already made to do that.

The key thing is modeling. You must model a situation in a useful way in order to apply logic to it. And then there is intention, which guides the process.

anonymous908213•10m ago
Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
anonymous908213•32m ago
Addendum:

> With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction.

??????? No, that is still exactly what they do. The article then lists a bunch of examples in which this is trivially exactly what is happening.

> “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?)

It doesn't need to "understand" probability. "The cat chased the mouse" shows up in the distribution 10 times. "The cat chased the bird" shows up in the distribution 5 times. Absent any other context, with the simplest possible model, it now has a probability of 2/3 for the mouse and 1/3 for the bird. You can make the probability calculations as complex as you want, but how could you possibly trot this out as an example that an LLM completing this sentence isn't a matter of trivial statistical prediction? Academia needs an asteroid, holy hell.

[I originally edited this into my post, but two people had replied by then, so I've split it off into its own comment.]
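
The count-based model sketched above fits in a few lines of Python, using the made-up corpus counts from the comment:

    from collections import Counter

    # Made-up counts: how often each continuation of
    # "The cat chased the ..." appears in the corpus.
    continuations = Counter({"mouse": 10, "bird": 5})

    total = sum(continuations.values())
    for token, count in continuations.items():
        print(f"P({token!r}) = {count / total:.3f}")
    # P('mouse') = 0.667, P('bird') = 0.333 -- pure frequency,
    # no "understanding" required, which is the commenter's point.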

n4r9•19m ago
One question is: how do you know that you (or humans in general) aren't also just applying statistical language rules while convincing yourself of some underlying narrative involving logical rules? I don't know the answer to this.

djoldman•24m ago
Many people would require an intelligent entity to successfully complete tasks with non-deterministic outputs.
messe•4m ago
> Probabilistic prediction is inherently incompatible with deterministic deduction

Prove that humans do it.

bsenftner•26m ago
Intelligence as described is not the entire "requirement" for intelligence. There are probably more layers here, but I see "intelligence" as the second layer; beneath that layer is comprehension, which is the ability to discriminate between similar things, even things trying to deceive you. And at layer zero, the giant mechanism pushing this layered form of intelligence found in living things is the predator/prey dynamic that dictates being alive or being food for something else remaining alive.

"Intelligence in AI" lacks any existential dynamic, our LLMs are literally linguistic mirrors of human literature and activity tracks. They are not intelligent, but for the most part we can imagine they are, while maintaining sharp critical analysis because they are idiot savants in the truest sense.

yogthos•17m ago
I'd argue you can have a much more precise definition than that. My definition of intelligence would be a system that has an internal model of a particular domain and uses this simulation to guide its actions within that domain. Being able to explain your actions derives directly from having a model of the environment.

For example, we all have an internal physics model in our heads that's built up through our continuous interaction with our environment. That acts as our shared context. That's why, if I tell you to bring me a cup of tea, I have a reasonable expectation that you understand what I requested and can execute the action intelligently. You have a conception of a table, of a cup, of tea, and critically, our conceptions are similar enough that we can both be reasonably sure we understand each other.

Incidentally, when humans end up talking about abstract topics, they often run into the exact same problem as LLMs, where the shared context is missing and we end up talking past each other.

The key problem with LLMs is that they currently lack this reinforcement loop. The system merely strings tokens together in a statistically likely fashion, but it doesn't really have a model of the domain it's working in to anchor them to.

In my opinion, stuff like agentic coding or embodiment with robotics moves us toward genuine intelligence. Here we have AI systems that have to interact with the world, and they get feedback when they do things wrong, so they can adjust their behavior based on that.