frontpage.

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•31s ago•1 comment

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•50s ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•1m ago•0 comments

Claude Opus 4.6 extends LLM pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•2m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•5m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•5m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•6m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•6m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•7m ago•1 comment

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•7m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•9m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•9m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•10m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•11m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•12m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•16m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•16m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•17m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•21m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•21m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•22m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
2•samuel246•25m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•25m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•25m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•25m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•26m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•29m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•29m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•29m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•31m ago•0 comments

A formal proof that AI-by-Learning is intractable

https://link.springer.com/article/10.1007/s42113-024-00217-5#appendices
14•birttAdenors•5mo ago

Comments

falcor84•5mo ago
The proof seems sound, but the premises appear to me to be overly restrictive. In particular, since an ML-based AI can write arbitrary code, there's nothing preventing these 2nd-generation AIs from being AGI.
vidarh•5mo ago
If such 2nd-generation AIs are AGI, then their claim that "as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable" is false.

Indeed, if their proof is true, they have a proof that the Church-Turing thesis is false, and that humans exceed the Turing computable, in which case they've upended the field of computational logic.

Yet they assert that they believe a Turing complete system "is expressive enough to computationally capture human cognition".

This would be a very silly belief to hold if their proof is true: if that belief holds, then a human brain is an existence proof that a computational device producing output identical to human cognition is possible, because a human brain would be one. They'd then need to explain why they think an equally powerful computational device can't be artificially created.

If they want to advance a claim like this, they need to address this issue. Not only does their paper not address it, but they make assertions about Turing-equivalence that are false if their conclusions are true, which suggests they don't even understand the implications.

Indeed, if they understood the implications of their claim, then a claim to have proven the Church-Turing thesis false, and/or that humans exceed the Turing computable, ought to be front and center, as it'd be a far more significant finding than the one they claim.

The paper is frankly an embarrassment.

vidarh•5mo ago
> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable

This would involve proving that humans exceed the Turing computable, which would mean proving the Church-Turing thesis is false.

Because if humans do not exceed the Turing computable, then every single human brain is an existence proof that AGI is intrinsically computationally tractable: it demonstrates that the necessary calculations can be done in a small enough computational device, and that the creation of such a device is possible.

Their paper accepts as true that Turing-completeness is sufficient to "computationally capture human cognition".

If we postulate that this is true (and we have no evidence to suggest it is not), then if their "proof" shows that their chosen mechanism cannot allow for the creation of AGI, all they have demonstrated is that their assumptions are wrong.
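The objection above is, at bottom, a purely propositional contrapositive, so it can be checked mechanically. A toy sketch (the proposition names H, B, and P are mine, not the paper's): let H = "human cognition is Turing-computable", B = "brains are physically realizable computational devices", and P = "creating human-level cognition is computationally intractable".

```python
from itertools import product

# Premise of the objection: if human cognition is Turing-computable (H) and
# brains are realizable computational devices (B), then building a system
# with human-level cognition is tractable, i.e. not P.
def premise(H, B, P):
    return (not (H and B)) or (not P)

# Claimed consequence: the paper's intractability result P forces rejecting
# H or B -- i.e. humans would exceed the Turing computable, or brains would
# not be physical computers, contradicting the paper's stated assumptions.
def consequence(H, B, P):
    return (not P) or (not H) or (not B)

# Exhaustive truth-table check that the premise entails the consequence.
valid = all(consequence(H, B, P)
            for H, B, P in product([False, True], repeat=3)
            if premise(H, B, P))
print(valid)  # True
```

The check passes on all eight assignments, confirming the argument's shape: given the premise, asserting P is only consistent with dropping H or B, which is exactly the dilemma the comment says the paper never confronts.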

benreesman•5mo ago
I'd like to "reclaim" both AI and machine learning as relatively emotionally neutral terms of art for useful software we have today or see a clearly articulated path towards.

Trying to get the most out of tools that sit somewhere between "the killer robots will eradicate humanity", " there goes my entire career", "fuck that guy with the skill I don't want to develop, let's take his career", and "I'm going to be so fucking rich if we can keep the wheels on this" is exhausting.

And the cognitive science thing.

neutronicus•5mo ago
I don't think that's achievable with all the science fiction surrounding "AI" specifically. You wouldn't be "reclaiming" the term, you'd be conquering an established cultural principality of emotionally-resonant science fiction.

Which is, of course, the precise reason why stakeholders are so insistent on using "AI" and "LLM" interchangeably.

Personally I think the only reasonable way to get us out of that psycho-linguistic space is just say "LLMs" and "LLM agents" when that's what we mean (am I leaving out some constellation of SotA technology? no, right?)

benreesman•5mo ago
I personally regard posterior/score-gradient/flow-match style models as the most interesting thing going on right now, ranging from rich media diffusers (the extended `SDXL` family tree which is now MMDiT and other heavy transformer stuff rapidly absorbing all of 2024's `LLM` tune ups) all the way through to protein discovery and other medical applications (tomography, it's a huge world).

LLMs are very useful, but they're into the asymptote of expert-labeling and other data-bounded stuff (God knows why the GB200-style Blackwell build-out is looking like a trillion bucks when Hopper is idle all over the world and we don't have a second Internet to pretrain a bigger RoPE/RMSNorm/CQA/MLA mixture GPT than the ones we already have).

akoboldfrying•5mo ago
> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.

Well, pregnant women create such systems routinely.

Due to the presence of the weasel word "factual" (it's not in the sentence I quoted, but is in the lead-up), no contradiction actually arises. It may well be intractable to create a perfectly factual human(-like or -level) AI -- but then, most of us would find much utility in a human(-like or -level) AI that is only factual most of the time -- IOW, a human(-like or -level) AI.

vidarh•5mo ago
But a "perfectly factual" AI wouldn't be human-like at all, and notably they appear to have actually tried to define what a "factual AI system" would mean.