
My favorite use-case for AI is writing logs

https://newsletter.vickiboykis.com/archive/my-favorite-use-case-for-ai-is-writing-logs/
47•todsacerdoti•1h ago•22 comments

ChatGPT agent: bridging research and action

https://openai.com/index/introducing-chatgpt-agent/
471•Topfi•8h ago•336 comments

Mistral Releases Deep Research, Voice, Projects in Le Chat

https://mistral.ai/news/le-chat-dives-deep
429•pember•10h ago•89 comments

Perfume reviews

https://gwern.net/blog/2025/perfume
142•surprisetalk•1d ago•75 comments

Hand: open-source Robot Hand

https://github.com/pollen-robotics/AmazingHand
341•vineethy•13h ago•94 comments

Anthropic tightens usage limits for Claude Code without telling users

https://techcrunch.com/2025/07/17/anthropic-tightens-usage-limits-for-claude-code-without-telling-users/
201•mfiguiere•4h ago•111 comments

Mammals Evolved into Ant Eaters 12 Times Since Dinosaur Age, Study Finds

https://news.njit.edu/mammals-evolved-ant-eaters-12-times-dinosaur-age-study-finds
28•zdw•2h ago•14 comments

People kept working, became healthier while on basic income: report (2020)

https://www.cbc.ca/news/canada/hamilton/basic-income-mcmaster-report-1.5485729
133•jszymborski•2h ago•109 comments

All AI models might be the same

https://blog.jxmo.io/p/there-is-only-one-model
136•jxmorris12•8h ago•73 comments

My experience with Claude Code after two weeks of adventures

https://sankalp.bearblog.dev/my-claude-code-experience-after-2-weeks-of-usage/
126•dejavucoder•7h ago•105 comments

Apple Intelligence Foundation Language Models Tech Report 2025

https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025
172•2bit•7h ago•117 comments

Creating an autonomous system for fun and profit (2017)

https://blog.thelifeofkenneth.com/2017/11/creating-autonomous-system-for-fun-and.html
15•cristoperb•3d ago•1 comment

A look at IBM's short-lived "butterfly" ThinkPad 701 of 1995

https://www.fastcompany.com/91356463/ibm-thinkpad-701-butterfly-keyboard
14•vontzy•2d ago•2 comments

Self-taught engineers often outperform (2024)

https://michaelbastos.com/blog/why-self-taught-engineers-often-outperform
143•mbastos•10h ago•115 comments

23andMe is out of bankruptcy. You should still delete your DNA

https://www.washingtonpost.com/technology/2025/07/17/23andme-bankruptcy-privacy/
25•1vuio0pswjnm7•2h ago•2 comments

Show HN: PlutoFilter – A single-header, zero-allocation image filter library in C

https://github.com/sammycage/plutofilter
45•sammycage•3d ago•8 comments

Felix Baumgartner, who jumped from edge of space, dies in paragliding accident

https://www.theguardian.com/sport/2025/jul/18/skydive-pioneer-felix-baumgartner-who-jumped-from-edge-of-space-dies-in-paragliding-accident
11•pseudolus•55m ago•2 comments

Run TypeScript code without worrying about configuration

https://tsx.is/
45•nailer•8h ago•33 comments

Archaeologists discover tomb of first king of Caracol

https://uh.edu/news-events/stories/2025/july/07102025-caracol-chase-discovery-maya-ruler.php
135•divbzero•3d ago•29 comments

Extending That XOR Trick to Billions of Rows

https://nochlin.com/blog/extending-that-xor-trick
3•hundredwatt•3d ago•0 comments

Delaunay Mesh Generation (2012)

https://people.eecs.berkeley.edu/~jrs/meshbook.html
11•ibobev•3d ago•5 comments

Game of trees hub

https://gothub.org/
21•todsacerdoti•2d ago•4 comments

Writing a competitive BZip2 encoder in Ada from scratch in a few days (2024)

https://gautiersblog.blogspot.com/2024/11/writing-bzip2-encoder-in-ada-from.html
92•etrez•3d ago•52 comments

Louisiana cancels $3B coastal repair funded by oil spill settlement

https://apnews.com/article/louisiana-coastal-restoration-gulf-oil-spill-affaae2877bf250f636a633a14fbd0c7
8•geox•49m ago•0 comments

Ask HN: What Pocket alternatives did you move to?

49•ahmedfromtunis•5h ago•67 comments

On doing hard things

https://parv.bearblog.dev/kayaking/
223•speckx•3d ago•81 comments

Stone blocks from the Lighthouse of Alexandria recovered from seafloor

https://archaeologymag.com/2025/07/lighthouse-of-alexandria-rises-again/
74•gnabgib•4d ago•13 comments

3D-printed living lung tissue

https://news.ok.ubc.ca/2025/07/15/ubco-researchers-create-3d-printed-living-lung-tissue/
19•gmays•8h ago•7 comments

Show HN: Easy alternative to giflib – header-only decoder in C

https://github.com/Ferki-git-creator/TurboStitchGIF-HeaderOnly-Fast-ZeroAllocation-PlatformIndependent-Embedded-C-GIF-Decoder
13•FerkiHN•13h ago•4 comments

Rejoy Health (YC W21) Is Hiring

https://www.ycombinator.com/companies/rejoy-health/jobs/DCsxNgv-software-engineer
1•rituraj_rhealth•13h ago

The AI Replaces Services Myth

https://aimode.substack.com/p/the-ai-replaces-services-myth
70•warthog•6h ago

Comments

tuatoru•5h ago
The title is slightly misleading.

What the article is really about is the notion that all of the money now paid in wages will somehow be paid to AI companies as AI replaces humans, and why that notion is muddle-headed.

It points out that businesses think of AI as software, and will pay software-level money for AI, not wage-level money. It finishes with the rhetorical question: are you paying $100k/year to an AI company for each coder you no longer need?

tines•5h ago
Not sure I quite get the point of the article. Sure, you won't capture $100k/year/dev. But if you capture $2k/year/dev and you replace every dev in the world... that's the goal, right?
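
(A back-of-envelope sketch of that scale, in Python. The worldwide developer count is an assumption for illustration, not a figure from the article or this thread:)

    # Back-of-envelope sketch; ~30M professional developers worldwide is an
    # assumed outside estimate, not a number from the thread.
    devs = 30_000_000        # assumed worldwide developer count
    ai_rev_per_dev = 2_000   # $/year captured per replaced dev (from the comment)
    wage_per_dev = 100_000   # $/year previously paid in wages (from the thread)

    print(f"AI revenue: ${devs * ai_rev_per_dev / 1e9:.0f}B/year")    # $60B/year
    print(f"Old wage bill: ${devs * wage_per_dev / 1e12:.1f}T/year")  # $3.0T/year
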
gh0stcat•5h ago
I don't think the value stacks like that. Hiring 10 low-level workers at 1/10th the salary to replace one higher-level worker doesn't work.
RedOrZed•5h ago
Sure it does! Let me just hire 9 women for 1 month...
aerostable_slug•5h ago
They're saying expectations that AI revenues will equal HR expenditures, like you can take the funds from one column to the other, are wrong-headed. That makes sense to me.
tines•3h ago
I agree, but that doesn't have to be true for investors to be salivating, is my point.
blibble•5h ago
that $2k won't last long, as you will never maintain a margin on a service like that

employee salaries are high because your competitors can't spawn 50,000 of them into existence by pushing a button

competition in the industry will destroy its own margins, and then its own customer base, very quickly

soon after, it will take down the economies of the countries those companies operate in

the whole thing is a capitalism self-destruct button, for entire economies

Revisional_Sin•5h ago
> What the article is really about is the idea that all of the money that is now paid in wages will somehow be paid to AI companies as AI replaces humans.

Is anyone actually claiming this?

lelandbatey•2h ago
Not directly, but indirectly: it's what's driving the FOMO among investors. See the image in the parent blog post, where VCs are directly comparing the amount of money spent on AI at the moment (tiny) against the amount of money spent on headcount in various industries, with the implication being that "AI could be making all the money that was being spent on headcount, what an opportunity!"
satyrnein•3h ago
It's almost more of a warning to founders and VCs, that an AI developer that replaces a $100k/year developer might only get them $10k/year in revenue.

But that means that AI just generated a $90k consumer surplus, which, on a societal level, is huge!

jsnk•5h ago
""" Not because AI can't do the work. It can.

But because the economics don't translate the way VCs claim. When you replace a $50,000 employee with AI, you don't capture $50,000 in software revenue. You capture $5,000 if you're lucky. """

So you are saying, AI does replace labour.

warthog•5h ago
Maybe I should change the title, indeed. The intention was to point out that, from the perspective of a startup, even if you replace the labour fully, you are not capturing 100x the previous market.
graphememes•5h ago
Realistically, AI makes the easiest part of the job easier, not all the other parts.
deepfriedbits•5h ago
For now
DanHulton•5h ago
Citation needed.
bgroins•5h ago
History
th0ma5•4h ago
Good thing we solved lipid disorders with Olean, Betamax gave us all superior home video, and you can monetize your HN comments with NFTs or else I wouldn't have any money to post!
pjmlp•5h ago
Experience from the industrial revolution and factory automation.
eikenberry•4h ago
So you mean in a hundred years or so? I don't think that is a good counter.
pjmlp•4h ago
If you think the time is what matters from the history lesson, good luck.
Quarrelsome•5h ago
Do execs really dream of entirely removing their engineering departments? If this happens, then I would expect some seriously large companies to fail in the future. For every good idea an exec has, they have X bad ideas that will cause problems, and their engineers save them from those. Conversely, an entirely AI engineering team will say "yes sir, right on it" to every request.
pjmlp•5h ago
Yes, that is exactly how offshoring and enterprise consulting take place.
eikenberry•4h ago
...and why they fail.
pjmlp•4h ago
Apparently not, given that it is the bread and butter of Fortune 500 consulting.
crinkly•4h ago
The consultancies are successful. The customers usually aren’t quite as fortunate, in my experience.

A great example is the current Tata disaster in the UK with M&S.

pjmlp•4h ago
Yet they keep sending RFPs and hiring consultancies, because at the end of the day what matters are those Excel sheets, not what people in the field think about the service.

Some C-level MBAs get a couple of lunches together, or a golf match; exchange a bit of give and take, with discounts for the next gig; business as usual.

Have you seen how valuable companies like Tata are, despite such examples?

crinkly•4h ago
Yes, and you allude to the problem: you can make a turd look good with the right numbers.
pjmlp•4h ago
Doesn't change the facts, or how appealing AI is for those companies' management.
crinkly•4h ago
Yes. Execs love AI because it’s the sycophant they need to massage their narcissism.

I’d really love to be replaced by AI. At that point I can take a few months of paid gardening leave before they are forced to rehire me.

Quarrelsome•3h ago
Idk, I feel like execs would run out of makeup before they accept their ideas are a pig. I worry this stuff is gonna work "just enough" to let them fool themselves for long enough to sink their orgs.

I'm envisioning a blog post on LinkedIn in the future:

> "How Claude Code ruined my million dollar business"

crinkly•3h ago
Working out how to capitalise on their failures is the only winning proposition. My brother did pretty well out of selling Aerons.
AkshatM•4h ago
> Need an example? Good. Coding.

> You must be paying your software engineers around $100,000 yearly.

> Now that vibecoding is out there, when was the last time you committed to pay $100,000 to Lovable or Replit or Claude?

I think the author is attacking a bit of a strawman. Yes, people won't pay human prices for AI services.

But the opportunity is in democratization (becoming the dominant platform) and bundling (taking over more and more of the lifecycle).

Your customers individually spend less, but you get more customers, and each customer spends a little extra for better results.

To respond to the analogy: not everyone had $100,000 to build their SaaS before. Now everyone who has a $100 budget can buy Lovable, Replit, and Claude subscriptions. You only need 1,000 customers to match what you made before.
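
(A minimal sketch of that break-even math, using only the numbers in this comment:)

    # Break-even sketch: one human-priced contract vs. many small subscriptions,
    # using the figures quoted in the comment above.
    old_contract = 100_000   # $ per customer at human prices
    subscription = 100       # $ budget per customer post-vibecoding
    print(old_contract // subscription)  # 1000 customers match the old revenue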

Sol-•3h ago
How much demand for software is there, though? I don't buy the argument that the cake will grow faster than jobs are devalued. On the bright side, prices might collapse accordingly and we'll end up in some post-scarcity world. No money in software, but also no cost, maybe.
kelseyfrog•4h ago
> You have to start from how the reality works and then derive your work.

Every philosopher eventually came to the same realization: we don't have access to the world as it is. We have access to a model of the world as it predicts and is predicted by our senses. Insofar as there is a correlation between the two, in whatever fidelity we can muster, we are fated to direct access only to a simulacrum.

For the most part they agree, but we have a serious flaw: our model inevitably influences our interpretation of our senses. This sometimes gets us into trouble when aspects of our model become self-reinforcing by framing sense input in ways that amplify the part of the model that confers the frame. For example, you live in a very different world if you search for and find confirmation for cynicism.

Arguing over metaphysical ontology is like kids fighting about which food (their favorite) is the best. It confuses subjectivity and objectivity. It might appear radical, but all frames are subjective, even ones shared by the majority of others.

Sure, Schopenhauer's philosophy is the mirror of his own nature, but there is no escape hatch. There is no externality, no objective perch to rest on, even one shared by others. That's not to say that all subjectivities are equally useful for navigating the world. Some models work better than others for prediction, control, and survival. But we should be clear that useful does not equate to truth: all models are wrong, but some are useful.

JC, I read the rest. The author doesn’t seem to grasp how profit actually works. Price and value are not welded together: you can sell something for more or less than the value it generates. Using his own example, if the AI and the human salesperson do the same work, their value is identical, independent of what each costs or commands in the market.

He seems wedded to a kind of market value realism, and from this shaky premise, he arrives at some bizarre conclusions.

harwoodjp•3h ago
Your dualism between model and world is nearly Cartesian. The model itself isn't separate from the world but produced materially (by ideology, sociality, naturally, etc.).
nine_k•2h ago
A map drawn on a flat piece of land is still not the whole land it depicts, even though it literally consists of that land. Any representation is a simplification; as far as we can judge, there's no adequately lossless compressing transform of large enough swaths of reality.
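
(A minimal counting sketch of that last claim, for the discrete case: no transform can losslessly shorten every input, because there are more n-bit strings than there are strictly shorter ones:)

    # Pigeonhole sketch: an injective (lossless) compressor that shortens every
    # length-n input would need 2**n distinct shorter outputs, but only
    # 2**n - 1 bit-strings of length < n exist, so two inputs must collide.
    n = 16
    inputs = 2 ** n                          # strings of exactly n bits
    shorter = sum(2 ** k for k in range(n))  # strings of length 0 .. n-1
    print(inputs, shorter)                   # 65536 65535
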
kelseyfrog•1h ago
> The model itself isn't separate from the world but produced materially

To me this is like drawing a circuit diagram on a piece of paper and trying to convince someone that "really, there is electricity flowing through it."

Models are relations between signifiers. There exists a transformation between the signified relations and the relations of the signifiers, but they are, in fact, two separate categories, and the transformation isn't bijective, i.e., it doesn't form an isomorphism.

card_zero•2h ago
Urgh. I feel the stodge of relativism weighing down on me.

OK, yes, all models (and people) are wrong. I'll also allow that usefulness is not the same as verisimilitude (truthiness). But there is externality, even though nobody can, as you say, "perch" on it: it's important that there is an objective reality to approach closer to, however uncertainly.

kelseyfrog•1h ago
I'm willing to grant non-symbolic externality, though I don't know how useful that is.

We will never access the signified, only the signifier. When we believe that signifiers exist externally, we are engaging in a suspension of epistemic honesty, and I get why we do it: it makes talking about and engaging with the world infinitely easier. But we shouldn't ever believe our own trick. That's reverting to a pre-operational version of cognition.

neuroelectron•3h ago
I have yet to see LLMs solve any new problems. I think it's pretty clear that a lot of the bouncing-ball programming demos are specifically trained on so they can be shown off at marketing/advertising events. Ask AI the most basic logical question about a random video game, like what element synergizes with the ice spike shield in Dragon Cave Masters, and it will make up some nonsense, despite it being something you can look up on gamefaqs.org. Now I know it knows the game I'm talking about, but in the latent space it's just another set of dimensions that flavor likely next-token patterns.

Sure, if you train an LLM enough on gamefaqs.org, it will be able to answer my question as accurately as an SQL query, and there are a lot of jobs that are just looking up answers that already exist, but these systems are never going to replace engineering teams. Now, I definitely have seen some novel ideas come out of LLMs, especially in earlier models like GPT-3, where hallucinations were more common and prompts weren't normalized into templates, but now we have "mixtures" of "experts" that really keep LLMs from being general intelligences.

XenophileJKO•3h ago
I don't know, I've had o3 create some surprisingly effective Magic: The Gathering decks based on newly released cards it has never seen. It just has to look up what cards are available.
outworlder•3h ago
I don't disagree, but your comment is puzzling. You start talking about a game (which probably lacks a lot of training data) and then extrapolate that to mean AI won't replace engineering teams. What?

We do not need AGI to cause massive damage to software engineering jobs. A lot of existing work is glue code, which AI can do pretty well. You don't need 'novel' solutions to problems to have useful AI. They don't need to prove P = NP.

sublinear•3h ago
Can you give an example of a non-trivial project that is pure glue code?
arevno•1h ago
Parent never said pure glue code, they said "a lot", which is roughly correct.

Any nontrivial business application will be on the order of ~60% glue, API, interface/model definition, and CRUD UI code, which LLMs are already quite good at.

They're also good at writing tests, with the caveat that a human reviews them.

They're pretty decent at emitting documentation from pure code, too.

The only way these models don't result in mass unemployment in this industry is if the amount of work required expands to fill the gap. Which is certainly possible! The Jevons Paradox of software development.