frontpage.

Apple is fighting for TSMC capacity as Nvidia takes center stage

https://www.culpium.com/p/exclusiveapple-is-fighting-for-tsmc
378•speckx•5h ago•256 comments

CVEs Affecting the Svelte Ecosystem

https://svelte.dev/blog/cves-affecting-the-svelte-ecosystem
86•tobr•2h ago•12 comments

JuiceFS is a distributed POSIX file system built on top of Redis and S3

https://github.com/juicedata/juicefs
24•tosh•1h ago•14 comments

Inside The Internet Archive's Infrastructure

https://hackernoon.com/the-long-now-of-the-web-inside-the-internet-archives-fight-against-forgetting
77•dvrp•1d ago•10 comments

Ask HN: How can we solve the loneliness epidemic?

124•publicdebates•3h ago•219 comments

Claude is good at assembling blocks, but still falls apart at creating them

https://www.approachwithalacrity.com/claude-ne/
57•bblcla•1d ago•36 comments

25 Years of Wikipedia

https://wikipedia25.org
323•easton•6h ago•277 comments

First impressions of Claude Cowork

https://simonw.substack.com/p/first-impressions-of-claude-cowork
61•stosssik•1d ago•24 comments

Design and Implementation of Sprites

https://fly.io/blog/design-and-implementation/
74•sethev•4h ago•55 comments

Supply Chain Vuln Compromised Core AWS GitHub Repos & Threatened the AWS Console

https://www.wiz.io/blog/wiz-research-codebreach-vulnerability-aws-codebuild
35•uvuv•2h ago•2 comments

Claude Cowork runs Linux VM via Apple virtualization framework

https://gist.github.com/simonw/35732f187edbe4fbd0bf976d013f22c8
38•jumploops•1d ago•18 comments

UK offshore wind prices come in 40% cheaper than gas in record auction

https://electrek.co/2026/01/14/uk-offshore-wind-record-auction/
42•doener•1h ago•11 comments

Show HN: Tabstack – Browser infrastructure for AI agents (by Mozilla)

65•MrTravisB•1d ago•8 comments

Show HN: OpenWork – an open-source alternative to Claude Cowork

https://github.com/different-ai/openwork
34•ben_talent•1d ago•9 comments

Found: Medieval Cargo Ship – Largest Vessel of Its Kind Ever

https://www.smithsonianmag.com/smart-news/archaeologists-say-theyve-unearthed-a-massive-medieval-...
73•bookofjoe•4h ago•14 comments

Show HN: TinyCity – A tiny city SIM for MicroPython (Thumby micro console)

https://github.com/chrisdiana/TinyCity
97•inflam52•5h ago•16 comments

The URL shortener that makes your links look as suspicious as possible

https://creepylink.com/
716•dreadsword•16h ago•133 comments

‘ELITE’: The Palantir app ICE uses to find neighborhoods to raid

https://werd.io/elite-the-palantir-app-ice-uses-to-find-neighborhoods-to-raid/
164•sdoering•1h ago•83 comments

Zuck#: A programming language for connecting the world. And harvesting it

https://jayzalowitz.github.io/zucksharp/
44•kf•1h ago•21 comments

Goscript: Transpile Go to human-readable TypeScript

https://github.com/aperturerobotics/goscript
12•aperturecjs•4d ago•3 comments

Jiga (YC W21) Is Hiring Full Stack Engineers

https://jiga.io/about-us
1•grmmph•8h ago

The 3D Software Rendering Technology of 1998's Thief: The Dark Project (2019)

https://nothings.org/gamedev/thief_rendering.html
112•suioir•9h ago•48 comments

OBS Studio 32.1.0 Beta 1 available

https://github.com/obsproject/obs-studio/releases/tag/32.1.0-beta1
123•Sean-Der•5h ago•33 comments

Ask HN: Anyone have a good solution for modern Mac to legacy SCSI converters?

14•stmw•1h ago•27 comments

Sinclair C5

https://en.wikipedia.org/wiki/Sinclair_C5
74•jszymborski•4d ago•47 comments

Ask HN: Share your personal website

799•susam•1d ago•2143 comments

GitHub Incident

https://www.githubstatus.com/incidents/q987xpbqjbpl
97•aggrrrh•3h ago•73 comments

Italy's privacy watchdog, scourge of US big tech, hit by corruption probe

https://www.reuters.com/sustainability/boards-policy-regulation/italys-privacy-watchdog-scourge-u...
42•giuliomagnifico•2h ago•12 comments

Programming, Evolved: Lessons and Observations

https://github.com/kulesh/dotfiles/blob/main/dev/dev/docs/programming-evolved.md
42•dnw•6h ago•22 comments

Show HN: ContextFort – Visibility and controls for browser agents

https://contextfort.ai/
8•ashwinr2002•1d ago•1 comment

Claude is good at assembling blocks, but still falls apart at creating them

https://www.approachwithalacrity.com/claude-ne/
56•bblcla•1d ago

Comments

mikece•1d ago
In my experience Claude is like a "good junior developer" -- it can do some things really well, FUBARs other things, but on the whole is something to which tasks can be delegated if they are well explained. If/when it gets to the ability level of a mid-level engineer, it will be revolutionary. Typically a mid-level engineer can be relied upon to do the right thing with no or minimal oversight, figure out incomplete instructions, and deliver quality results (and even train up the juniors on some things). At that point the only reason to have human junior engineers is so they can learn their way up the ladder to being an architect, responsible for coordinating swarms of Claude Agents to develop whole applications and complete complex tasks and initiatives.

Beyond that, what can Claude do... analyze the business and market as a whole, decide on product features, spot industry inefficiencies, do gap analysis, and then define projects to address those and coordinate fleets of agents to change or even radically pivot an entire business?

I don't think we'll get to the point where all you have is a CEO and a massive Claude account but it's not completely science fiction the more I think about it.

alfalfasprout•1h ago
> I don't think we'll get to the point where all you have is a CEO and a massive Claude account but it's not completely science fiction the more I think about it.

At that point, why do you even need the CEO?

ako•1h ago
And who does he sell his software to? Companies that have only one employee don't need a lot of user licenses for their employees…
AshamedCaptain•53m ago
What would be the point of selling software in such a world (where anyone could build any piece of software with a handful of keystrokes)?
arjie•1h ago
Reminds me of an old joke[0]:

> The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.

But really, the reason is that people like Pieter Levels do exist: masters at product vision and marketing. He also happens to be a proficient programmer, but there are probably other versions of him who are not programmers and who will now find the bar to shipping a product easier to meet.

0: https://quoteinvestigator.com/2022/01/30/future-factory/

MrDunham•35m ago
My technical cofounder reminds me of this story on a weekly basis.
ceejayoz•1h ago
As Steinbeck is often slightly misquoted:

> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires.

Same deal here, but everyone imagines themselves as the billionaire CEO in charge of the perfectly compliant and effective AI.

tiku•1h ago
For the network.
mettamage•57m ago
All of us are CEOs by that point.
ArtificialAI•55m ago
If everyone is, no one is.
empath75•41m ago
Wouldn't that be a good thing?
pixelready•35m ago
The board (in theory) represents the interests of investors, and even with all of the other duties of a CEO stripped away, they will want a wringable neck / PR mouthpiece / fall guy for strategic missteps or publicly unpopular moves by the company. The managerial equivalent of having your hands on the steering wheel of a self-driving car.
jerf•19m ago
You will need the CEO to watch over the AI and ensure that the interests of the company are being pursued and not the interests of the owners of the AI.

That's probably the biggest threat to the long-term success of the AI industry: the inevitable pull towards encroaching more and more of their own interests into the AIs themselves, driven by that Harvard Business School mentality we're all so familiar with, trying to "capture" more and more of the value being generated and leaving less and less for their customers, until their customers' full-time job is ensuring the AIs are actually generating some value for them and not just for the AI owner.

maxilevi•1h ago
LLMs are just really good search. Ask it to create something and it's searching within the pretrained weights. Ask it to find something and it's semantically searching within your codebase. Ask it to modify something and it will do both. Once you understand it's just search, you can get really good results.
johnisgood•54m ago
Calling it "just search" is like calling a compiler "just string manipulation". Not false, but aggressively missing the point.
emp17344•50m ago
No, “just search” is correct. Boosters desperately want it to be something more, but it really is just a tool.
johnisgood•49m ago
Yes, it is a tool. No, it is not "just search".

Is your CPU running arbitrary code "just search over transistor states"?

Calling LLMs "just search" is the kind of reductive take that sounds clever while explaining nothing. By that logic, your brain is "just electrochemical gradients".

RhythmFox•46m ago
I mean, actually not a bad metaphor, but it does depend on the software you are running as to how much of a 'search' you could say the CPU is doing among its transistor states. If you are running an LLM then the metaphor seems very apt indeed.
jvanderbot•42m ago
What would you add?

To me it's "search" like a missile does "flight". It's got a target and closed-loop guidance, and is mostly fire-and-forget (for search). At that, it excels.

I think the closed loop plus great summarization is the key to all the magic.

bitwize•33m ago
Which is kind of funny, because my standard quip is that AI research, beginning in the 1950s/1960s, and indeed much of late-20th-century computer tech, especially along the Boston/SV axis, was funded by the government so that "the missile could know where it is". The DoD wanted smarter ICBMs that could autonomously identify and steer toward enemy targets, and smarter defense networks that could discern a genuine missile strike from, say, 99 red balloons going by.
soulofmischief•30m ago
It's a prediction algorithm that walks a high-dimensional manifold, and in that sense all application of knowledge is just "search". So yes, you're fundamentally correct but still fundamentally wrong, because you treat this foundational truth as the beginning and end of what LLMs do, and thus your mental model does not adequately describe what these tools are capable of.
jvanderbot•8m ago
Me? My mental model? I gave an analogy for Claude, not an explanation of LLMs.

But you know what? I was mentally thinking of both deep think / research and Claude code, both of which are literally closed loop. I see this is slightly off topic b/c others are talking about the LLM only.

oliverbennett•19m ago
It feels like defining LLMs by what they're good at, which also includes things like summarisation and grouping.
maxilevi•11m ago
I don't mean search in the reductionist way, but rather that it's much better at translating, finding, and mapping concepts when everything is provided vs. creating from scratch. If it could truly think, it would be able to bootstrap creations from basic principles like we do, but it really can't. That doesn't mean it's not a great, powerful tool.
bhadass•45m ago
better mental model: it's a lossy compression of human knowledge that can decompress and recombine in novel (sometimes useful, sometimes sloppy) ways.

classical search simply retrieves; LLMs can synthesize as well.

andrei_says_•43m ago
“Novel” to the person who has not consumed the training data. Otherwise, just training data combined in highly probable ways.

Not quite autocomplete but not intelligence either.

soulofmischief•32m ago
Citation needed that grokked capabilities in a sufficiently advanced model cannot combinatorially lead to contextually novel output distributions, especially with a skilled guiding hand.
RhythmFox•39m ago
This isn't strictly better to me. It captures some intuitions about how a neural network ends up encoding its inputs over time in a 'lossy' way (it doesn't store previous input states in an explicit form). Maybe saying 'probabilistic compression/decompression' makes it a bit more accurate? I don't really see how calling it compression/decompression supports your 'synthesize' claim at the very end, but I am curious whether you had a specific reason to use the term.
michalsustr•42m ago
This article resonates with exactly how I think about it as well. For example, at minfx.ai (a Neptune/wandb alternative), we cache time series that can contain millions of floats for fast access. Any engineer worth their title would never make a copy of these and would pass around pointers for access. Opus, when stuck in a place where passing the pointer was a bit more difficult (due to async and Rust lifetimes), would just make the copy rather than rearchitect, or at least stop and notify the user. Many such examples of 'lazy' and thus bad design.
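
The minfx code is Rust, but the shape of the failure carries over to any language. As a rough, hypothetical Python sketch (SeriesCache and its API are invented for illustration), the "lazy" copy versus the intended shared-access design might look like:

    import array

    class SeriesCache:
        """Caches large time series (millions of floats) keyed by run id."""

        def __init__(self) -> None:
            self._series: dict[str, array.array] = {}

        def put(self, run_id: str, values: list[float]) -> None:
            self._series[run_id] = array.array("d", values)

        def get(self, run_id: str) -> array.array:
            # Hand back the cached buffer itself; callers read it in place.
            return self._series[run_id]

    def plot_lazy(cache: SeriesCache, run_id: str) -> list[float]:
        # The "lazy" fix: duplicate millions of floats just to sidestep
        # an awkward ownership question in the caller.
        return list(cache.get(run_id))

    def plot_shared(cache: SeriesCache, run_id: str) -> memoryview:
        # The intended design: pass a view of the cached data around
        # and never copy it.
        return memoryview(cache.get(run_id))

The point is the same as in the comment: the cache hands out one buffer and callers read it in place; copying only papers over the awkward ownership question.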
mklyachman•28m ago
Wow, what an excellent blog. Highly suggest trying out the creator's tool (stardrift.ai) too!
simonw•11m ago
I'm not entirely convinced by the anecdote here where Claude wrote "bad" React code:

> But in context, this was obviously insane. I knew that key and id came from the same upstream source. So the correct solution was to have the upstream source also pass id to the code that had key, to let it do a fast lookup.

I've seen Claude make mistakes like that too, but then the moment you say "you can modify the calling code as well" or even ask "any way we could do this better?" it suggests the optimal solution.

My guess is that Claude is trained to bias towards making minimal edits to solve problems. This is a desirable property, because six months ago a common complaint about LLMs was that you'd ask for a small change and they would rewrite dozens of additional lines of code.

I expect that adding a CLAUDE.md rule saying "always look for more efficient implementations that might involve larger changes and propose those to the user for their confirmation if appropriate" might solve the author's complaint here.

bblcla•4m ago
(OP here).

Yeah, that's fair - a friend of mine also called this out on Twitter (https://x.com/konstiwohlwend/status/2010799158261936281) and I went into more technical detail about the specific problem there.

> I've seen Claude make mistakes like that too, but then the moment you say "you can modify the calling code as well" or even ask "any way we could do this better?" it suggests the optimal solution.

I agree, but I think I'm less optimistic than you that Claude will be able to catch its own mistakes in the future. On the other hand, I can definitely see how a ~more intelligent model might be able to catch mistakes on a larger and larger scale.

> I expect that adding a CLAUDE.md rule saying "always look for more efficient implementations that might involve larger changes and propose those to the user for their confirmation if appropriate" might solve the author's complaint here.

I'm not sure about this! There are a few things Claude does that seem unfixable even by updating CLAUDE.md.

Some other footguns I keep seeing in Python and constantly have to fix (sketched below) are:

- writing lots of nested if clauses instead of simple functions that return early
- putting imports in functions instead of at the top level
- swallowing exceptions instead of raising (constantly a huge problem)

These are small things, but the fact that it still fails at such relatively simple tasks is informative about what Opus 4.5 is and isn't good at.
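
For concreteness, a minimal hypothetical sketch of those three footguns next to the fixes they usually need (illustrative code, not taken from the post or from the model's output):

    import json

    # Footgun: deeply nested if clauses instead of returning early.
    def parse_port_nested(config: dict) -> int | None:
        if config:
            if "server" in config:
                if "port" in config["server"]:
                    return int(config["server"]["port"])
        return None

    # Fix: guard clauses keep the happy path flat.
    def parse_port(config: dict) -> int | None:
        server = config.get("server")
        if not server or "port" not in server:
            return None
        return int(server["port"])

    # Footgun: import buried in the function, exception silently swallowed.
    def load_settings_swallowed(path: str) -> dict:
        import json  # belongs at the top of the module
        try:
            with open(path) as f:
                return json.load(f)
        except Exception:
            return {}  # hides missing files and malformed JSON alike

    # Fix: top-level import, and let the error propagate to the caller.
    def load_settings(path: str) -> dict:
        with open(path) as f:
            return json.load(f)  # FileNotFoundError / JSONDecodeError surface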

joshribakoff•4m ago
I expect that adding instructions that attempt to undo training produces worse results than not including the overbroad generalization in the training in the first place. I think the author isn't making a complaint; they're documenting a tradeoff.