frontpage.

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•3m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
3•quentin101010•8m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•10m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•12m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•12m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•14m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•15m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•16m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
1•gtsnexp•18m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•21m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
1•xipz•22m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•26m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•28m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•29m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT–>ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•30m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•34m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•34m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
1•jingkai_he•37m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

1•swimmingkiim•41m ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•44m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•47m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
4•wjb3•47m ago•2 comments

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•52m ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•53m ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•1h ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•1h ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
3•maheshbhatiya•1h ago•0 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
5•awaaz•1h ago•2 comments

The British Empire's Brothels

https://www.historytoday.com/archive/feature/british-empires-brothels
2•pepys•1h ago•0 comments

What rare disease AI teaches us about longitudinal health

https://myaether.live/blog/what-rare-disease-ai-teaches-us-about-longitudinal-health
2•takmak007•1h ago•0 comments

A formal proof that AI-by-Learning is intractable

https://link.springer.com/article/10.1007/s42113-024-00217-5#appendices
14•birttAdenors•5mo ago

Comments

falcor84•5mo ago
The proof seems sound, but the premises appear to me to be overly restrictive. In particular, given that an ML-based AI can write arbitrary code, there's nothing preventing these 2nd-generation AIs from being AGI.
vidarh•5mo ago
If such 2nd-generation AIs are AGI, then their claim that "as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable" is false.

Indeed, if their proof is true, they have a proof that the Church-Turing thesis is false, and that humans exceed the Turing computable, in which case they've upended the field of computational logic.

Yet they assert that they believe a Turing complete system "is expressive enough to computationally capture human cognition".

This would be a very silly belief to hold if their proof is true: if that claim is true, then a human brain is an existence proof that a computational device capable of producing output identical to human cognition is possible, because a human brain would be one. They'd then need to explain why they think an equally powerful computational device can't be artificially created.

If they want to advance a claim like this, they need to address this issue. Not only does their paper not address it, but they make assertions about Turing-equivalence that are false if their conclusions are true, which suggests they don't even understand the implications.

Indeed, if they understood the implications of their claim, then a claim to have proven the Church-Turing thesis false, and/or to have proven that humans exceed the Turing computable, ought to be front and center, as it'd be a far more significant finding than the one they claim.

The paper is frankly an embarrassment.

vidarh•5mo ago
> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable

This would involve proving that humans exceed the Turing computable, which would mean proving the Church-Turing thesis is false.

Because if humans do not exceed the Turing computable, then every single human brain is an existence proof that AGI is intrinsically computationally tractable: it demonstrates that the necessary calculations can be done in a small enough computational device, and that the creation of such a device is possible.

Their paper accepts as true that Turing-completeness is sufficient to "computationally capture human cognition".

If we postulate that this is true (and we have no evidence to suggest it is not), and their "proof" shows that their chosen mechanism cannot allow for the creation of AGI, then all they have demonstrated is that their assumptions are wrong.
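
(To make the structure of that objection concrete, here is a minimal propositional sketch of the contrapositive being argued, written in Lean. The proposition names are invented purely for illustration and are not taken from the paper.)

```lean
-- Propositional sketch of the objection above (not the paper's own formalism;
-- all names here are illustrative assumptions).
--   CT          : human cognition is Turing-computable (Church-Turing applied to cognition)
--   Realizable  : some constructible computational device exhibits human-level cognition
--   Intractable : the paper's claim that creating such a system is intrinsically intractable
variable (CT Realizable Intractable : Prop)

-- The brain-as-existence-proof step: if cognition is Turing-computable, then a device
-- realizing it is possible, because a human brain already is one.
variable (brain_exists : CT → Realizable)

-- Reading the paper's intractability result as "no such device can be created".
variable (paper_claim : Intractable → ¬Realizable)

-- Contrapositive: if the intractability claim holds, Church-Turing (for cognition) must fail.
example (h : Intractable) : ¬CT :=
  fun hCT => paper_claim h (brain_exists hCT)
```

The sketch only captures the shape of the argument: accepting both the intractability claim and the brain-as-existence-proof step forces the Church-Turing premise for cognition to be given up.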

benreesman•5mo ago
I'd like to "reclaim" both AI and machine learning as relatively emotionally neutral terms of art for useful software we either have today or can see a clearly articulated path towards.

Trying to get the most out of tools that sit somewhere between "the killer robots will eradicate humanity", "there goes my entire career", "fuck that guy with the skill I don't want to develop, let's take his career", and "I'm going to be so fucking rich if we can keep the wheels on this" is exhausting.

And the cognitive science thing.

neutronicus•5mo ago
I don't think that's achievable with all the science fiction surrounding "AI" specifically. You wouldn't be "reclaiming" the term, you'd be conquering an established cultural principality of emotionally-resonant science fiction.

Which is, of course, the precise reason why stakeholders are so insistent on using "AI" and "LLM" interchangeably.

Personally I think the only reasonable way to get us out of that psycho-linguistic space is to just say "LLMs" and "LLM agents" when that's what we mean (am I leaving out some constellation of SotA technology? no, right?).

benreesman•5mo ago
I personally regard posterior/score-gradient/flow-match style models as the most interesting thing going on right now, ranging from rich media diffusers (the extended `SDXL` family tree, which is now MMDiT and other heavy transformer stuff rapidly absorbing all of 2024's `LLM` tune-ups) all the way through to protein discovery and other medical applications (tomography, it's a huge world).

LLMs are very useful, but they're into the asymptote of expert-labeling and other data-bounded stuff (God knows why the GB200-style Blackwell build-out is looking like a trillion bucks when Hopper is idle all over the world and we don't have a second Internet to pretrain a bigger RoPE/RMSNorm/CQA/MLA mixture GPT than the ones we already have).

akoboldfrying•5mo ago
> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.

Well, pregnant women create such systems routinely.

Due to the presence of the weasel word "factual" (it's not in the sentence I quoted, but is in the lead-up), no contradiction actually arises. It may well be intractable to create a perfectly factual human(-like or -level) AI -- but then, most of us would find much utility in a human(-like or -level) AI that is only factual most of the time -- IOW, a human(-like or -level) AI.

vidarh•5mo ago
But a "perfectly factual" AI wouldn't be human-like at all, and notably they appear to have actually tried to define what a "factual AI system" would mean.