frontpage.

I failed to recreate the 1996 Space Jam website with Claude

https://j0nah.com/i-failed-to-recreate-the-1996-space-jam-website-with-claude/
358•thecr0w•11h ago•287 comments

Bag of words, have mercy on us

https://www.experimental-history.com/p/bag-of-words-have-mercy-on-us
83•ntnbr•5h ago•74 comments

Mechanical power generation using Earth's ambient radiation

https://www.science.org/doi/10.1126/sciadv.adw6833
75•defrost•6h ago•26 comments

Dollar-stores overcharge customers while promising low prices

https://www.theguardian.com/us-news/2025/dec/03/customers-pay-more-rising-dollar-store-costs
301•bookofjoe•13h ago•457 comments

The C++ standard for the F-35 Fighter Jet [video]

https://www.youtube.com/watch?v=Gv4sDL9Ljww
213•AareyBaba•10h ago•208 comments

Google Titans architecture, helping AI have long-term memory

https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
424•Alifatisk•15h ago•143 comments

Uninitialized garbage on ia64 can be deadly (2004)

https://devblogs.microsoft.com/oldnewthing/20040119-00/?p=41003
43•HeliumHydride•3d ago•15 comments

The era of jobs is ending

https://www.thepavement.xyz/p/the-era-of-jobs-is-ending
29•SturgeonsLaw•3h ago•19 comments

Work disincentives hit the near-poor hardest (2022)

https://www.niskanencenter.org/work-disincentives-hit-the-near-poor-hardest-why-and-what-to-do-ab...
46•folump•5d ago•19 comments

Turtletoy

https://turtletoy.net/
18•ustad•4d ago•1 comment

An Interactive Guide to the Fourier Transform

https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/
164•pykello•5d ago•20 comments

What the heck is going on at Apple?

https://www.cnn.com/2025/12/06/tech/apple-tim-cook-leadership-changes
89•methuselah_in•11h ago•100 comments

Vibe Coding: Empowering and Imprisoning

https://www.anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/
31•zdw•5d ago•19 comments

Scala 3 slowed us down?

https://kmaliszewski9.github.io/scala/2025/12/07/scala3-slowdown.html
195•kmaliszewski•13h ago•121 comments

Socialist ends by market means: A history

https://lucasvance.github.io/2100/history/
33•sirponm•1h ago•6 comments

Toyota Unintended Acceleration and the Big Bowl of "Spaghetti" Code (2013)

https://www.safetyresearch.net/toyota-unintended-acceleration-and-the-big-bowl-of-spaghetti-code/
19•SoKamil•3h ago•14 comments

How I block all online ads

https://troubled.engineer/posts/no-ads/
109•StrLght•6h ago•87 comments

The Anatomy of a macOS App

https://eclecticlight.co/2025/12/04/the-anatomy-of-a-macos-app/
202•elashri•15h ago•59 comments

Impacts of working from home on mental health tracked in study of 16K Aussies

https://www.abc.net.au/news/2025-12-05/australian-working-from-home-mental-health-impacts-tracked...
10•anotherevan•3d ago•6 comments

CATL expects oceanic electric ships in 3 years

https://cleantechnica.com/2025/12/05/catl-expects-oceanic-electric-ships-in-3-years/
83•thelastgallon•1d ago•72 comments

Build a DIY magnetometer with a couple of seasoning bottles

https://spectrum.ieee.org/listen-to-protons-diy-magnetometer
74•nullbyte808•1w ago•17 comments

Show HN: Cdecl-dump - represent C declarations visually

https://github.com/bbu/cdecl-dump
12•bluetomcat•3h ago•6 comments

Millions of Americans mess up their taxes, but a new law will help

https://www.wakeuptopolitics.com/p/millions-of-americans-mess-up-their
52•toomuchtodo•9h ago•33 comments

Spinlocks vs. Mutexes: When to Spin and When to Sleep

https://howtech.substack.com/p/spinlocks-vs-mutexes-when-to-spin
47•birdculture•3h ago•10 comments

A two-person method to simulate die rolls (2023)

https://blog.42yeah.is/algorithm/2023/08/05/two-person-die.html
55•Fraterkes•2d ago•34 comments

Nested Learning: A new ML paradigm for continual learning

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
92•themgt•13h ago•2 comments

Estimates are difficult for developers and product owners

https://thorsell.io/2025/12/07/estimates.html
170•todsacerdoti•9h ago•182 comments

The state of Schleswig-Holstein is consistently relying on open source

https://www.heise.de/en/news/Goodbye-Microsoft-Schleswig-Holstein-relies-on-Open-Source-and-saves...
524•doener•14h ago•238 comments

I wasted years of my life in crypto

https://twitter.com/kenchangh/status/1994854381267947640
75•Anon84•15h ago•113 comments

Java Hello World, LLVM Edition

https://www.javaadvent.com/2025/12/java-hello-world-llvm-edition.html
168•ingve•16h ago•60 comments

Vibe Coding: Empowering and Imprisoning

https://www.anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/
31•zdw•5d ago

Comments

Aperocky•1h ago
There is too much of both fear and optimism about what is essentially a better compiler and Google.

Eventually we will gravitate back to square one, and business people are not going to be writing COBOL or Visual Basic or any of the long list of languages (which now includes natural ones, like English) that claim to be so easy that a manager could write them. And Googling/prompting remains a skill that surprisingly few have truly mastered.

Of course all the venture capital believes that soon we'll be at AGI, but as with the internet bubble of 2001, we could awkwardly stay at this stage for quite a long time.

crinklewrinkle•1h ago
> A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.

Why not ask ChatGPT?

wilg•1h ago
The entire premise which he summarizes as:

> A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages

is just pure speculation, totally unsupported, and almost certainly untrue, and makes very little sense given the way LLMs and ChatGPT in particular came about. Every time I read something from Anil Dash it seems like it's this absolutely braindead sort of "analysis".

aaron_m04•1h ago
Why do you say it's almost certainly untrue? Capital is well known for trying to suppress wages.
viraptor•1h ago
The amounts spent there have no practical chance of return in a reasonable timeframe. There aren't that many devs who would actually be eliminated.
chickensong•1h ago
Agreed, and the following summary point:

> Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech

is also pure speculation and doesn't make sense. In fact, enabling people to create small and simple apps could well challenge and weaken dependence on big tech.

I stopped reading and closed the page.

lubujackson•1h ago
In whatever way this is true, it has very little to do with sticking it to "coders"; it is about magically solving/automating processes of any kind. Replacing programmers is small potatoes, and they are ultimately not a good category of jobs to replace. Programmers are ideal future AI operators!

What AI usage has underlined is that we are forever bound by our ability to communicate precisely what we want the AI to do for us. Even if LLMs were perfect, if we give them squishy instructions we get squishy results. If we give them a well-crafted objective, appropriate context, and all the rest, they can respond just about perfectly. Then again, that is a lot of what programming has always been about in the first place: translating human goals into actionable code. Only the interface and abstraction level have changed.

apical_dendrite•37m ago
I recently heard a C-suite executive at a unicorn startup describe a particular industry as made up of small-scale, prideful craftsmen who will be unable to compete with agentic AI.

I don't know how much "VCs and tech tycoons" want to undermine coders specifically, but they see a huge opportunity to make money by making things much more efficiently (and thus cheaper) than they can be made now. The way they plan to do that is to reduce the cost of labor. Which means either automating away jobs or making jobs much less specialized, so that you don't need a highly-paid craftsman.

Think about Henry Ford setting up an assembly line where a worker sits at the same location and performs the same action all day, every day. You don't need a highly-skilled, highly-paid person with leverage and power to do that job.

FarmerPotato•1h ago
But how much of this article was written by an LLM? Cliches, listicles, fluffy abstractions abandoned and not developed...

Was there anything original in it? I'd like to ask this article, what was your knowledge cut-off date?

fbrncci•10m ago
I can't wait for the day when LLM-written articles are indistinguishable from real writing, so people stop complaining about this. I am giving that another 6 months. In a lot of cases it's not just lazy prompt -> article, but rather text synthesis through LLMs -> article. But people will still complain/rant (bias: I run a blog with only AI-written content, but a loyal audience).
siliconc0w•1h ago
I was working on a new project and wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned to like React, and their frontend performance drops pretty considerably if you aren't using it. Even pasting the entire documentation into context and giving specific examples close to what I wanted, SOTA models still hallucinated attributes/APIs instead of using the correct ones. And it isn't even that you have to use framework X, it's that you need to use X as of the date of training.

I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases which are going to be out-of-distribution. Context engineering helps, but you still don't get near in-distribution performance. It's probably not unsolvable, but it's a pretty big problem ATM.

NewsaHackO•32m ago
I use it with Angular and Svelte and it works pretty well. I used to use Lit, which at least the older models did pretty badly with, but it is less well known, so that's expected.
JimDabell•23m ago
Yes, Claude Opus 4.5 recently scored 100% on SvelteBench:

https://khromov.github.io/svelte-bench/benchmark-results-mer...

I found that LLMs sometimes get confused by Lit because they don’t understand the limitations of the shadow DOM. So they’ll do something like throw an event and try to catch it from a parent and treat it normally, not realising that the shadow DOM screws that all up, or they assume global / reset CSS will apply globally when you actually need to reapply it to every single component.

What I find interesting is all the platforms like Lovable etc. seem to be choosing Supabase, and LLMs are pretty terrible with that – constantly getting RLS wrong etc.

pan69•16m ago
> What you quickly find out is that LLMs are really tuned to like react

Sounds to me like that there is simply more React code to train the model on.

ehnto•15m ago
That is the "big issue" I have found as well. Not only are enterprise codebases often proprietary, ground up architectures, the actual hard part is business logic, locating required knowledge, and taking into account a decade of changing business requirements. All of that information is usually inside a bunch of different humans heads and by the time you get it all out and processed, code is often a small part of the task.
atrettel•1h ago
I agree with the notion that LLMs may just end up repeating coding mistakes of the past because they are statistically likely mistakes.

I'm reminded of an old quote by Dijkstra about Fortran [1]: "In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included."

I've encountered that same problem in some older scientific codes (both C and Fortran). After a while, the bugs somewhat become features because people just don't know to question them anymore. To me, this is why it is important to understand the code thoroughly enough to question what is going on (regardless of who or what wrote it).

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498...

throwaway150•1h ago
> You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at.

I don't understand this argument. I mean the same applies for books. All books teach you what has come before. Nobody says "You can't make anything truly radical with books". Radical things are built by people after reading those books. Why can't people build radical things after learning or after being assisted by LLMs?

AdieuToLogic•3m ago
>> You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at.

> I don't understand this argument. I mean the same applies for books. All books teach you what has come before. Nobody says "You can't make anything truly radical with books". Radical things are built by people after reading those books.

Books share concepts expressed by people understanding those concepts (or purporting to do so) in a manner which is relatable to the reader. This is achievable due to a largely shared common lived experience as both parties are humans.

In short, people reason, learn, remember, and can relate with each other.

> Why can't people build radical things after learning ...

They absolutely can and often do.

> ... or after being assisted by LLMs?

Therein lies the problem. LLMs are not assistants.

They are statistical token (text) document generators. That's it.

spankalee•38m ago
I'm not sure that "radical" apps aren't more often built because we don't know how, than because we don't have the time, funding, or risk budget to do it.

For those cases, I think LLM-assisted coding has the ability to drastically change the usual formula and help bring into being projects that people would previously only daydream about, hoping that the world aligns with their vision one day and they can magically spin up a team to work on it.

Coding agents are fast becoming at least part of that team. If your idea is in a domain where they've had a lot of high-quality training code, they can already do a pretty amazing job of getting a project off the ground. If you're a programmer with at least some domain knowledge and can guide the design and push the agent past tough spots, you can keep the project going when the LLMs get bogged down.

I think at the very least we're going to see some incredibly impressive prototypes, if not complete implementations, of OSes, programming languages, hypermedia systems, protocols, etc., because one passionate programmer threw a lot of LLM time at them.

Basically lots of people are going to be able to build their own TempleOS now. Some of those might end up being impactful.