frontpage.

The mysterious black fungus from Chernobyl that may eat radiation

https://www.bbc.com/future/article/20251125-the-mysterious-black-fungus-from-chernobyl-that-appea...
121•bookmtn•2h ago•35 comments

Petition to formally recognize open source work as civic service in Germany

https://www.openpetition.de/petition/online/anerkennung-von-open-source-arbeit-als-ehrenamt-in-de...
19•PhilippGille•24m ago•0 comments

Show HN: Glasses to detect smart-glasses that have cameras

https://github.com/NullPxl/banrays
330•nullpxl•8h ago•116 comments

A Tale of Four Fuzzers

https://tigerbeetle.com/blog/2025-11-28-tale-of-four-fuzzers/
33•jorangreef•2h ago•4 comments

Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation

https://www.wsj.com/tech/ai/tech-titans-amass-multimillion-dollar-war-chests-to-fight-ai-regulati...
67•thm•5h ago•76 comments

Pocketbase – open-source realtime back end in 1 file

https://pocketbase.io/
433•modinfo•10h ago•129 comments

Moss: a Rust Linux-compatible kernel in 26,000 lines of code

https://github.com/hexagonal-sun/moss
205•hexagonal-sun•6d ago•41 comments

EU Council Approves New "Chat Control" Mandate Pushing Mass Surveillance

https://reclaimthenet.org/eu-council-approves-new-chat-control-mandate-pushing-mass-surveillance
391•fragebogen•3h ago•221 comments

A Repository with 44 Years of Unix Evolution

https://www.spinellis.gr/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html
44•lioeters•4h ago•9 comments

A Remarkable Assertion from A16Z

https://nealstephenson.substack.com/p/a-remarkable-assertion-from-a16z
89•boplicity•1h ago•34 comments

The three thousand year journey of colchicine

https://www.worksinprogress.news/p/the-three-thousand-year-journey-of
20•quadrin•1w ago•2 comments

A trillion dollars (potentially) wasted on gen-AI

https://garymarcus.substack.com/p/a-trillion-dollars-is-a-terrible
32•flail•1h ago•15 comments

How to make precise sheet metal parts (photochemical machining) [video]

https://www.youtube.com/watch?v=bR9EN3kUlfg
53•surprisetalk•5d ago•5 comments

Tiger Style: Coding philosophy (2024)

https://tigerstyle.dev/
88•nateb2022•9h ago•86 comments

Switzerland: Data Protection Officers Impose Broad Cloud Ban for Authorities

https://www.heise.de/en/news/Switzerland-Data-Protection-Officers-Impose-Broad-Cloud-Ban-for-Auth...
50•TechTechTech•2h ago•22 comments

Open (Apache 2.0) TTS model for streaming conversational audio in realtime

https://github.com/nari-labs/dia2
27•SweetSoftPillow•4d ago•2 comments

Same-day upstream Linux support for Snapdragon 8 Elite Gen 5

https://www.qualcomm.com/developer/blog/2025/10/same-day-snapdragon-8-elite-gen-5-upstream-linux-...
434•mfilion•22h ago•207 comments

Vsora Jotunn-8 5nm European inference chip

https://vsora.com/products/jotunn-8/
146•rdg42•15h ago•51 comments

OS Malevich – how we made a system that embodies the idea of simplicity (2017)

https://www.ajax-systems.uz/blog/hub-os-malevich-story/
8•frxx•4d ago•1 comment

How Charles M Schulz created Charlie Brown and Snoopy (2024)

https://www.bbc.com/culture/article/20241205-how-charles-m-schulz-created-charlie-brown-and-snoopy
154•1659447091•14h ago•72 comments

How to use Linux vsock for fast VM communication

https://popovicu.com/posts/how-to-use-linux-vsock-for-fast-vm-communication/
58•mfrw•9h ago•13 comments

Beads – A memory upgrade for your coding agent

https://github.com/steveyegge/beads
80•latchkey•9h ago•40 comments

A fast EDN (Extensible Data Notation) reader written in C11 with SIMD boost

https://github.com/DotFox/edn.c
94•delaguardo•4d ago•35 comments

GitLab discovers widespread NPM supply chain attack

https://about.gitlab.com/blog/gitlab-discovers-widespread-npm-supply-chain-attack/
289•OuterVale•22h ago•156 comments

SQLite as an Application File Format

https://sqlite.org/appfileformat.html
21•gjvc•6h ago•3 comments

Implementing Bluetooth LE Audio and Auracast on Linux Systems

https://www.collabora.com/news-and-blog/blog/2025/11/24/implementing-bluetooth-le-audio-and-aurac...
97•losgehts•3d ago•3 comments

Cats became our companions way later than you think

https://www.bbc.co.uk/news/articles/cq8dvdp9gn7o
33•n1b0m•3h ago•29 comments

250MWh 'Sand Battery' to start construction in Finland

https://www.energy-storage.news/250mwh-sand-battery-to-start-construction-in-finland-for-both-hea...
296•doener•15h ago•217 comments

Africa's forests have switched from absorbing to emitting carbon

https://phys.org/news/2025-11-africa-forests-absorbing-emitting-carbon.html
42•pseudolus•3h ago•16 comments

A programmer-friendly I/O abstraction over io_uring and kqueue (2022)

https://tigerbeetle.com/blog/2022-11-23-a-friendly-abstraction-over-iouring-and-kqueue/
103•enz•15h ago•32 comments

A trillion dollars (potentially) wasted on gen-AI

https://garymarcus.substack.com/p/a-trillion-dollars-is-a-terrible
32•flail•1h ago

Comments

naveen99•30m ago
When it comes to machine learning, research has consistently shown that pretty much the only thing that matters is scaling.

Ilya should just enjoy his billions raised with no strings.

philipwhiuk•24m ago
> When it comes to machine learning, research has consistently shown that pretty much the only thing that matters is scaling.

Yes, indeed, that is why all we have done since the 90s is scale up the 'expert systems' we invented ...

That's such an ahistorical take it's crazy.

* 1966: failure of machine translation

* 1969: criticism of perceptrons (early, single-layer artificial neural networks)

* 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University

* 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report

* 1973–74: DARPA's cutbacks to academic AI research in general

* 1987: collapse of the LISP machine market

* 1988: cancellation of new spending on AI by the Strategic Computing Initiative

* 1990s: many expert systems were abandoned

* 1990s: end of the Fifth Generation computer project's original goals

Time and time again, we have seen that each wave of academic research begets a degree of progress, improved by the application of hardware and money, but ultimately only a step towards AGI, which ends with the realisation that there's a missing cognitive ability that can't be overcome by absurd amounts of compute.

LLMs are not the final step.

bbor•2m ago
Well, expert systems aren’t machine learning, they’re symbolic. You mention perceptrons, but that timeline is proof of the power of scaling, not against it — they didn’t start to really work until we built giant computers in the ’90s, and they have been revolutionizing the field ever since.
CuriouslyC•5m ago
If you think scaling is all that matters, you need to learn more about ML.

Read about the No Free Lunch Theorem. Basically, the reason we need to "scale" so hard is that we're building models that we want to be good at everything. We could build models that are as good as LLMs at a narrow fraction of the tasks we ask of them, at probably 1/10th the parameters.
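The No Free Lunch claim referenced above can be made concrete with a toy demonstration (an editor's illustrative sketch, not anything from the comment): over all 16 boolean functions on two input bits, any fixed deterministic learner trained on three inputs averages exactly 50% accuracy on the held-out fourth input.

```python
from itertools import product

# All 16 boolean functions on two bits, each represented as a truth table.
inputs = list(product([0, 1], repeat=2))
functions = list(product([0, 1], repeat=4))  # one output bit per input

def learner(train):
    # An arbitrary deterministic learner: predict the majority
    # training label (ties broken toward 0).
    labels = [y for _, y in train]
    return 1 if sum(labels) > len(labels) / 2 else 0

test_point = inputs[3]  # hold out one input
accs = []
for table in functions:
    f = dict(zip(inputs, table))
    train = [(x, f[x]) for x in inputs[:3]]
    pred = learner(train)  # prediction for the held-out input
    accs.append(pred == f[test_point])

# Averaged over all possible target functions, off-training-set
# accuracy is exactly 0.5 -- no learner beats any other.
assert sum(accs) / len(accs) == 0.5
```

Swapping in any other deterministic `learner` gives the same 0.5 average, which is the NFL intuition: without assumptions about the task distribution, no model is better than another on average.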

an0malous•5m ago
Didn’t OpenAI themselves publish a paper years ago showing that scaling parameters has diminishing returns?
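The diminishing-returns point mentioned above can be sketched numerically (an editor's illustration; the constants are the approximate Kaplan et al. 2020 parameter-scaling fit, not figures from the thread): if loss follows a power law in parameter count N, each 10x increase in N buys a smaller absolute loss improvement than the last.

```python
# Power-law loss fit of the form L(N) = (N_c / N)**alpha.
# N_c ~ 8.8e13 and alpha ~ 0.076 are the rough values reported in the
# GPT-3-era scaling-law literature (assumed here for illustration).

def loss(n_params, n_c=8.8e13, alpha=0.076):
    """Cross-entropy loss predicted by the power-law fit."""
    return (n_c / n_params) ** alpha

sizes = [1e8, 1e9, 1e10, 1e11]          # 100M .. 100B parameters
losses = [loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]

# Each successive 10x in parameters yields a smaller absolute loss drop:
assert gains[0] > gains[1] > gains[2]
```

The power law never flattens to zero, but the cost of each marginal improvement grows multiplicatively, which is the usual "diminishing returns" reading of those curves.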
ComplexSystems•29m ago
I think the article makes decent points but I don't agree with the general conclusion here, which is that all of this investment is wasted unless it "reaches AGI." Maybe it isn't necessary for every single dollar we spend on AI/LLM products and services to go exclusively toward the goal of "reaching AGI?" Perhaps it's alright if these dollars instead go to building out useful services and applications based on the LLM technologies we already have.

The author, for whatever reason, views it as a foregone conclusion that every dollar spent in this way is a waste of time and resources, but I wouldn't view any of that as wasted investment at all. It isn't any different from any other trend - by this logic, we may as well view the cloud/SaaS craze of the last decade as a waste of time. After all, the last decade was also fueled by lots of unprofitable companies, speculative investment and so on, and failed to reach any pie-in-the-sky Renaissance-level civilization-altering outcome. Was it all a waste of time?

It's ultimately just another thing industry is doing as demand keeps evolving. There is demand for building the current AI stack out, and demand for improving it. None of it seems wasted.

robot-wrangler•14m ago
It's not about "every dollar spent" being a waste of time, it's about acknowledging the reality of opportunity cost. Of course, no one in any movement is likely to listen to their detractors, but in this case the pioneers seem to agree.

https://www.youtube.com/watch?v=DtePicx_kFY
https://www.bbc.com/news/articles/cy7e7mj0jmro

ComplexSystems•3m ago
I think there is broad agreement that new models and architectures are needed, but I don't see it as a waste to also scale the stack that we currently have. That's what Silicon Valley has been doing for the past 50 years - scaling things out while inventing the next set of things - and I don't see this as any different. Maybe current architectures will go the way of the floppy disk, but it wasn't a waste to scale up production of floppy disk drives while they were relevant. And ChatGPT was still released only 3 years ago!
roenxi•24m ago
Just because something didn't work out doesn't mean it was a waste, and it isn't particularly clear that the LLM boom was wasted, or that it is over, or that it isn't working. I can't figure out what people mean when they say "AGI" any more; we appear to be past that. We've got something that seems to be general and seems to be more intelligent than an average human. Apparently AGI means a sort of Einstein-Tolstoy-Jesus hybrid that can ride a unicycle and is far beyond the reach of most people I know.

Also, if anyone wants to know what a real effort to waste a trillion dollars can buy ... https://costsofwar.watson.brown.edu/

embedding-shape•18m ago
> Just because something didn't work out doesn't mean it was a waste

One thing to keep in mind is that most of these people who go around spreading unfounded criticism of LLMs, "gen-AI", and AI generally aren't usually very deep into computer science, let alone science itself. In their mind, if someone does an experiment and it doesn't pan out, that means "science itself failed", because they literally don't know how research and science work in practice.

bbor•6m ago
Maybe true in general, but Gary Marcus is an experienced researcher and entrepreneur who’s been writing about AI for literally decades.

I’m quite critical, but I think we have to grant that he has plenty of credentials and understands the technical nature of what he’s critiquing quite well!

austin-cheney•5m ago
> Just because something didn't work out doesn't mean it was a waste

Its all about scale.

If you spend $100 on something that didn't work out, that money wasn't wasted if you learned something amazing. If you spend $1,000,000,000,000 on something that didn't work out, the expectation is that you learn something close to 10,000,000,000x more than from the $100 spend. If the value of the learning is several orders of magnitude less than the level of investment, there is absolutely tremendous waste.

For example: nobody would count spending a billion dollars on a failed project as valuable if the only learning was how to avoid future paper cuts.

mensetmanusman•8m ago
I’m glad the 0.01% have something to burn their money on.
teraflop•2m ago
It would be nice if they could burn it on something that didn't require them to buy up the world's supply of DDR5 RAM, and triple prices for everyone else.

https://pcpartpicker.com/trends/price/memory/

bbor•8m ago
I always love a Marcus hot take, but this one is more infuriating than usual. He’s taking all these prominent engineers saying “we need new techniques to build upon the massive, unexpected success we’ve had”, twisting it into “LLMs were never a success and sucked all along”, and listing them alongside people that no one should be taking seriously — namely, Emily Bender and Ed Zitron.

Of course, he includes enough weasel phrases that you could never nail him down on any particular negative sentiment; LLMs aren’t bad, they just need to be “complemented”. But even if we didn’t have context, the whole thesis of the piece runs completely counter to this — you don’t “waste” a trillion dollars on something that just needs to be complemented!

FWIW, I totally agree with his more mundane philosophical points about the need to finally unify the work of the Scruffies and the Neats. The problem is that he frames it like some rare insight that he and his fellow rebels found, rather than something that was being articulated in depth by one of the field's main leaders 35 years ago[1]. Every one of the tens of thousands of people currently working on “agential” AI knows it too, even if they don’t have the academic background to articulate it.

I look forward to the day when Mr. Marcus can feel like he’s sufficiently won, and thus get back to collaborating with the rest of us… This level of vitriolic, sustained cynicism is just antithetical to the scientific method at this point. It is a social practice, after all!

[1] https://www.mit.edu/~dxh/marvin/web.media.mit.edu/~minsky/pa...