frontpage.
Made with ♥ by @iamnishanth

Open Source @Github


X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•3m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
1•neogoose•6m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
1•mav5431•6m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
1•sizzle•6m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•7m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•8m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
1•vunderba•8m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•14m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•22m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•23m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•26m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
2•pabs3•28m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
1•pabs3•29m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•30m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•31m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•35m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•44m ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•48m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•52m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•54m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
3•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•1h ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comments

Stop treating 'AGI' as the north-star goal of AI research

https://arxiv.org/abs/2502.03689
46•todsacerdoti•9mo ago

Comments

Der_Einzige•9mo ago
No.
az09mugen•9mo ago
Yes.
ashoeafoot•9mo ago
Introducing the two-bit weight! Now you can pack all your uniform greyzones into the variable name. Save memory, process your data faster on smaller chips! We can retrain them; we have the technology!
YetAnotherNick•9mo ago
I highly doubt any company is focused solely on AGI, including OpenAI. Otherwise they wouldn't keep releasing five versions of 4o with different "personalities".
pixl97•9mo ago
Humans are a general intelligence, yet the vast majority of us have our own personality. Unless you're thinking of a superintelligence that can simulate any personality it wants.
YetAnotherNick•9mo ago
Tweaking the "personality" of LLMs has nothing to do with how smart they are. And using lists or emojis more doesn't make them more intelligent. It just increases usage, since people like talking to them more.
eli_gottlieb•9mo ago
Google currently has job ads out for post-AGI AI research.
YetAnotherNick•9mo ago
See my post again. I said "just focusing".
belter•9mo ago
AGI persists not because it’s a coherent scientific objective, but because it functions as a lucrative mythology perfectly aligned with VC expectations...Next step...analyze AGI not just as bad science, but as good branding.
tim333•9mo ago
It seems something like a scientific objective. To understand human thinking try making a machine that can do it.
chunkmonke99•9mo ago
Are we sure that is what is happening? Can you really do any meaningful "science" when the subject under study is a black box shrouded in secrecy? What has been learned from LLMs regarding human cognition, and is there broad convergence on that view?
tim333•9mo ago
It's not the main driver of what's happening, but it's an aspect of it that goes back a long way. For example, Turing, writing in 1946:

>I am more interested in the possibility of producing models of the action of the brain than in the applications to practical computing...although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model... https://en.wikipedia.org/wiki/Unorganized_machine

chunkmonke99•9mo ago
Oh man, I was not aware of this aspect of Turing's work. Thank you for sharing!

Honestly, trying to reverse engineer something to understand how it works is interesting and potentially worthwhile! To me it's obvious that "broadly mechanistic" or causal explanations of specific cognitive functions can be created. I am not doubting that a "machine" can mimic human cognitive abilities, insofar as we can state them or "tokenize" them precisely. I am pretty sure that is the whole basis of cognitive science.

But just because we can mimic those capacities: does that imply that those are the same mechanisms that exist in nature? Herbert Simon made a distinction between "natural" and "artificial" systems: an LLM's function is to model language (and they do a damn good job of that!); does the brain have one function, and what is it? If you build a submarine, does that tell you something about how fish swim? Even if it swims faster than any of the fish?

tim333•9mo ago
Building models can help you understand things. Maybe not so much submarines, but building model aircraft and studying aerodynamics definitely helps us understand how birds fly.

Artificial neural networks are already helping some understanding of brains for example there was a lot of debate about "universal grammar":

>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...

and it now seems to be demonstrated that LLM-like neural networks are quite good at picking up language without an "acquisition device" beyond the general network.

chunkmonke99•9mo ago
That is a fair point. I do not disagree that building (tenuous at best) models of neurons can help inform science and engineering, and vice versa. Much of "classic" digital signal processing and image processing was an interplay between psychologists, engineers, neuroscientists, etc. So that is very useful! But what we have here is mistaking the airplane for the bird! My pet parrot doesn't have an engine! The map is not the territory, as it is said.

The point of this thread and the paper isn't that cognition is not an important goal to understand, nor that it isn't computational (computation seems to be the best model we currently have). It is that AGI is (as the previous comment mentioned) a marketing term of little scientific value. It is too vague and carries more of the baggage of religious belief than of cold, hard scientific inquiry. It used to just be called "AI" or, as was debated in the infancy of the field, just "complex information processing". The current for-profit companies (let's be clear, OpenAI is not really a charity) don't actually care about understanding anything ... to an outsider they appear to maximize hype to drum up investment so that they can build a god, while some people get very, very rich. To many in these communities, intelligence is some magical quantity that can "solve everything!" I am not sure which part of those beliefs is scientific. Why are we earmarking hundreds of billions of dollars (some of it public money) to benefit these companies?

>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...

Would you say that one day someone just happened to find an LLM chilling under the sun, and we spoke some words to it for a few years by pointing at things, and one day it was speaking full sentences and asking about the world? Or is it that a lot of engineering work was put into specifically designing something for the purpose of generating text? Do you think humans were designed to speak or to be intelligent, and by whom? Can dolphins, gorillas, and elephants also speak language? They have complex brains with a lot of neurons. Chomsky's point was just "if human, then can speak language", so "non-human can speak language" doesn't refute the central point. I am no expert on Chomsky; you may know much more about that. But again, it doesn't seem relevant to the actual thread.

chunkmonke99•9mo ago
So TL;DR: I am not sure we learned a lot about how humans learn language from LLMs: all we learned is that it can be done by "something", but we already knew that. These specific technologies are products designed to sell things, and they need that hype for that. But that doesn't take away from the fact that they are freaking cool!

https://leon.bottou.org/news/two_lessons_from_iclr_2025

pixl97•9mo ago
>Can you really do any meaningful "science" when the subject understudy is a black box that is under a shroud of secrecy?

Are you saying it's impossible to understand human brains?

chunkmonke99•9mo ago
No. I am saying that the broader scientific community probably cannot run experiments on ChatGPT, Claude, or Gemini the way it can on, say, a mouse's brain, or even on human subjects, with carefully controlled experiments that can be replicated by third parties.

As for "understanding", you have to be more precise about what you mean: we created LLMs and Transformer-based ANNs (and ANNs themselves), and yet it appears we are all mystified by what they can do ... as though they are magic ... and will lead to superintelligence (an even more poorly defined term than regular-ass intelligence).

I'm not trying to be difficult, but I sometimes wonder what would happen if all of us took a step back and really tried to understand this tech before jumping to conclusions! "The thing that was designed to be a universal function approximator approximates the function we trained it to approximate! HOLY CRAP, WE MAY HAVE MADE GOD!" It's clear that the technologies we currently have are miraculous and do amazing things! But are they really doing exactly what humans do? Is it possible to converge on similar destinations without taking the same route? Are we even at the exact same destination?

tim333•9mo ago
People are trying to run experiments on Claude; see https://news.ycombinator.com/item?id=43495617
chunkmonke99•9mo ago
Yes, I know of this "study". AFAIK it has not been subjected to peer review, and it uses a lot of suggestive language. Other studies have shown that these things use large bags of heuristics, which isn't surprising given that they are trained on unimaginably large amounts of tokens.

I am not an expert ... but to me, anything associated with these companies is marketing. I understand that makes me a "stick in the mud", but it's not a crime to be skeptical! THAT SHOULD BE THE DEFAULT ... we used to believe in gods, demons, and monsters. And given that Anthropic is very, very closely tied to EA and longtermism, and that this is the "slickest" paper I have ever read ...

If I had the mental capacity to have read a good amount of the internet and millions of pirated books, I wouldn't be confused by perturbations of questions I had already seen.

I am sure there are lots of cogent rebuttals to what I am saying, and hey, maybe I'm just a sack of meat that is miffed about being replaced by a "superior intelligence" that is "more evolved". But that isn't how evolution works either, and it's troubling to see that sentiment become so prevalent.

selfhoster11•9mo ago
I'm not so sure that there haven't been some things we've learned about cognition, or about cognition-having entities in general. Whether or not LLMs' inner workings overlap with how humans do it, we now know more about the subject itself.
bmacho•9mo ago
Is it just me, or is this title gross and annoying to the point that it's straight-up trolling?
tim333•9mo ago
It is, kinda. And reading the abstract, it's maybe worse.
adityamwagh•9mo ago
Yeah. I also don’t understand why it’s an arXiv article rather than a blog post.
ivape•9mo ago
The talent pool has thinned due to oversaturation, isn't that obvious?
umbra07•9mo ago
Because papers are increasingly written to catch the attention of news publications/blogs/social media instead of professors/academics/researchers.
cwillu•9mo ago
Position papers are not a recent phenomenon.
ivape•9mo ago
We can't.

I'll explain why very simply. The vision of AI and the vision of virtual reality both existed well before the technology. We envisioned humanoid robots well before we ever had a chance of making them. We also envisioned an all-knowing AI well before we had our current technology. We will continue to envision the end state because it is the most natural conclusion. No human can fail to imagine the inevitable. Every human, technical or not, has the capacity to fully imagine this future, which means the entirety of the human race will be directed toward this foregone conclusion.

Like God and Death (and taxes). shrugs

Smith: It is inevitable, Mr. Anderson.

Nullabillity•9mo ago
We don't have to build the torment nexus.
yupitsme123•9mo ago
Some marketer decided to call this stuff AI precisely because they wanted to make the connection to those grand visions that you're talking about.

If instead we called them what they are, large language models, would you still say they were hurtling inevitably toward generalized intelligence?

ivape•9mo ago
Yeah.
yupitsme123•9mo ago
Why? How do LLMs and diffusion models relate to the "vision of AI and the vision of Virtual Reality"?