frontpage.

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•31s ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•1m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•2m ago•0 comments

You don't need a Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•3m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•5m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•5m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•5m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
1•mooreds•6m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•7m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•7m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•8m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•8m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•8m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•9m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•11m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•11m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•12m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•12m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•12m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•12m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•13m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•13m ago•0 comments

Claude Opus 4.6 extends LLM pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•14m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•17m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•17m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•18m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•18m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•19m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•20m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•21m ago•0 comments

Ask HN: Are developers sad about AI writing more of their code?

14•JFerreol_J•6mo ago
I’ve been chatting with a few dev friends and colleagues about Cursor, Copilot, etc.,

and what surprised me was that their biggest feeling about this was neither excitement nor concern, but sadness.

Sadness in the sense that they are afraid the “fun” parts of their job (thinking, building, solving) might slowly be taken away. That they’ll become bored reviewers.

It got me wondering if other devs feel this way?

Are we really on a path where engineering turns into a supervisory job? Or is it just temporary until it shifts into something radically different?

Curious to hear from dev folks here!

Comments

tomsayervin•6mo ago
Reading someone else's code is the worst thing, so I tend to agree

This being said, it does get some uninteresting things done very fast, so I’m not entirely sad

JFerreol_J•6mo ago
Agreed it's not black or white. Do you feel 100% more productive than before though?
wryoak•6mo ago
Recently I considered trying to find a position for myself just reviewing code part time, because I find that to be the funnest part. I find writing (significant amounts of) code quite tedious.

Hearing people say that code review/reading is the boring part makes me think maybe I should actually pursue this

amichail•6mo ago
I think indie devs love it since they can focus on manually coding the fun parts and leave the AI to code the boring parts.
JFerreol_J•6mo ago
Agreed on this, but do you think this split that we humans are happy with can last forever?
JohnFen•6mo ago
I think the opinion amongst indie devs (at least the ones I know) is as varied as the opinion amongst other sorts of devs. Some love it, some hate it, some are neutral. It seems to depend on what it is about development that the particular dev enjoys and values.
notorious_pgb•6mo ago
tl;dr: Yes; at least some of us are. I deeply am.

I wouldn't normally post a link to my blog in the comments of another thread -- I'm really not trying to shamelessly plug here -- it's just _incredibly_ relevant, and I've already poured my heart and soul into writing out exactly what I think here, so I think it's germane:

https://prettygoodblog.com/p/the-big-secret-that-big-ai-does...

> I cannot write the necessary philosophical screed this deserves, but: there are things which make us human and in our individual humanity unique; these things are art and skill of any kind; and it so greatly behooves one to strive to master art in any and all its forms.

uncircle•6mo ago
The opening of your post made me think of a scifi writing prompt where, in a dystopic future, people are paying mega tech-corporations thousands of dollars per month to replace their meat brain with a subpar artificial one just to become more productive corporate ~~slaves~~workers.
notorious_pgb•6mo ago
It seems to me that once things like Neuralink start being used for more than strictly restoring functionality in injured or disabled individuals -- once there's a competitive advantage to have one -- it'll quickly become an arms race.
ahdanggit•6mo ago
I'm tired of getting thrown AI slop, but I think it's a people problem, not an AI problem.

"Hey, I made this doc, can you just make sure it looks OK, maybe add a couple things to it?" - Only to find out its completely useless AI slop and barely any details are correct and everything essential is absent.

Same situation with:

- "Hey, can you take a look at this script to see if it's ok"?

- "Hey, do you know why this code isn't working?"

- "Hey, I created that diagram..."

slop, slop, slop. Low effort people will put in low effort with these tools. I bet there's lots of people I work with that use AI and I don't know because they're high effort people.

And in all cases I've fixed the problems and helped them, but I've realized two things and stopped doing that recently:

When these folks use AI to generate artifacts, they take even less accountability: "I dunno, that's just what the AI did..."

They also have no interest in learning. They get AI to do the thing they don't want to learn, then when that fails they just try to get someone else to do it for them.

physicsguy•6mo ago
I find it a mix of really frustrating (mostly it not doing what I want it to do, making random changes alongside perfectly good changes of the type I want) and amazing when it works. That said, I think it works best in an unconstrained environment; once you start adding constraints (don't do it via this method, don't use this library or import any new dependencies) you get much worse results. This inevitably means it works better at the start of projects than when coming onto something that's been around for a while and has its own patterns.
JohnFen•6mo ago
It looks like LLMs will automate a lot of what I enjoy doing myself and increase the amount of work of the sort I dislike (such as reviewing code I didn't recently write).

So my worst-case scenario with LLMs in terms of my job is that they will make my job hard to tolerate. If that actually happens, I'll leave the field entirely as there would be no room for the likes of myself in it anymore.

silentpuck•6mo ago
I think the real sadness is that many developers may stop learning the deeper fundamentals — the things that AI can't replace.

When people start relying on the "I just want it to work this way" mentality and let AI take over, they can lose track of how things actually work under the hood. And that opens the door to a lot of problems — not just bugs, but blind trust in things they don't understand.

For me, the joy is in understanding. Understanding what each part does, how it all fits together. If that disappears, I think we lose something important along the way.

notorious_pgb•6mo ago
This is definitely the core issue from my perspective.
vedmakk•6mo ago
But in a way, a 90s assembler dev would argue that today's developers don't understand how things work "under the hood" at all. I guess with each generation we just abstract to higher layers and solve bigger problems while "relying on things under the hood to work just fine".
silentpuck•6mo ago
Yeah, that’s a fair point. Abstraction is part of progress — and we do rely more and more on things “just working.”

But that trust can be dangerous when we don’t even know what we’re trusting. And when something breaks, it can leave us completely blind.

I’m not saying everyone needs to go all the way down to the metal — but I do think it’s important to at least understand the boundaries. Know what’s underneath, even if you don’t touch it every day.

Otherwise, it’s not engineering anymore — it’s guessing.

And I’ve never been good at just “believing” things work. I need to poke around, break them, fix them. That’s how I learn. Maybe I’m just stubborn like that.

jjice•6mo ago
I was scared of this initially, but I've found that I mostly just use it for tedious code that I would normally procrastinate just because of how mind numbing it was (like adding another CRUD endpoint) or making a sweeping change across code that wasn't as simple as a find and replace.

For things that keep me interested, I just won't use LLM features. Sometimes at the end I'll have it audit my code and sometimes it'll catch something that can be improved.

Also test cases. It's not perfect, but having a large chunk of that automated is very nice.
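
For a sense of scale, the kind of "mind-numbing CRUD endpoint" being described looks roughly like this minimal sketch (Python with FastAPI and Pydantic assumed; the Item model and in-memory store are hypothetical, not anything from this thread):

    # Minimal sketch of "yet another CRUD endpoint".
    # The Item model and in-memory store are purely illustrative.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        id: int
        name: str

    items: dict[int, Item] = {}  # toy in-memory store

    @app.post("/items")
    def create_item(item: Item) -> Item:
        if item.id in items:
            raise HTTPException(status_code=409, detail="item already exists")
        items[item.id] = item
        return item

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> Item:
        if item_id not in items:
            raise HTTPException(status_code=404, detail="item not found")
        return items[item_id]

Nothing here is intellectually interesting, which is exactly the point: it's the repetitive scaffolding commenters describe handing off to an LLM.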

iExploder•6mo ago
If you think about it, the resource-wasteful approach pre-LLM didn't really make sense (thousands of people often re-implementing similar use cases). LLMs are like global open source repositories of everything known to mankind, with search on steroids. We can never go back, if only for this one reason (imagine how many hours of people's lives were lost implementing the same function, or yet another CRUD app)... so if we can't go back, what's next?

The paradigm is shifting from deciding how to do things to deciding what to do, maybe by writing requirements and constraints and letting AI figure out the details.

The skill will be in using specific language with AI to get the desired behavior. Old code as we know it is garbage; new code is writing requirements and constraints.

So in a sense we will not be reviewers, nor architects or designers, but writers of requirements and use cases in a specific LLM language, which will have its own, different challenges.

There might still be a place for cream-of-the-crop, mega-talented people to solve new puzzles (still with AI assistance) in order to generate new input knowledge to train LLMs.

mfalcon•6mo ago
I think that we will be reviewers too, we have to know if the AI generated artifact does what we want.
iExploder•6mo ago
Ok, but this can be tested with black-box tests, and you can add performance tests on top to make sure there is no bloat over time.

you might look at generated code as often as you look at generated assembly now
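
A minimal sketch of that approach, assuming pytest and a hypothetical generated function slugify (neither comes from the comment itself): the tests pin down observable behavior and a rough time budget without ever reading the generated implementation.

    # Black-box checks on a hypothetical AI-generated function: assert only on
    # observable behavior and a crude time budget, never on the implementation.
    import time

    import pytest

    from generated_module import slugify  # hypothetical generated code under test

    @pytest.mark.parametrize("raw, expected", [
        ("Hello, World!", "hello-world"),
        ("  spaces   everywhere ", "spaces-everywhere"),
        ("already-slugged", "already-slugged"),
    ])
    def test_slugify_behavior(raw, expected):
        assert slugify(raw) == expected

    def test_slugify_stays_fast():
        start = time.perf_counter()
        for _ in range(10_000):
            slugify("The quick brown fox jumps over the lazy dog")
        # Rough guard against bloat creeping in across regenerations.
        assert time.perf_counter() - start < 1.0

As long as a regenerated implementation keeps passing the suite within the time budget, you never need to look inside it, much as few people read generated assembly today.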

billylo•6mo ago
I'm retired and write apps as community givebacks and as a creative outlet.

No, I am not sad because I am in control. If there is something I want to take on as an intellectual challenge, I do it.

If it's just mechanical tasks or UI layout tweaking, AI is perfect. I can become the user who keeps asking for fine-tuning of corner radius. :-)

moomoo11•6mo ago
I mean, if you’re building React buttons and using Pydantic and think that makes you an engineer, then yeah, you’re getting replaced.

I don’t think people who know how the computer and networking works are going anywhere any time soon. Or the people who actually made react or pydantic.

brokegrammer•6mo ago
There's a certain elitism in programming where people will claim that library authors are superior to programmers who can only use those libraries, which is devoid of logic. Surprising, because programmers excel at logic.

Either way, I've been using AI to help me stop relying on external packages and frameworks lately. Turns out, AI can write libraries too because libraries and end products are simply code.

AI will replace all of us eventually, no matter how good you think you are. When guns were invented, samurai went jobless, but blacksmiths still had jobs. Fast forward to 2025: I don't think anyone is considering a career in blacksmithing.

brokegrammer•6mo ago
I never found coding to be fun. All I care about is shipping features. However, I do care about maintainability and user experience. That's why I use AI to write much of the "dumb" code, and then I'll spend time doing quality control.

I always hated writing HTML and CSS, so I have AI do that. I always found writing tests and type annotations tedious, and would usually avoid those to my own detriment. Now AI is able to do those.

AI in general has been a net positive for me. I hope it keeps getting better and better so that I can stop coding for good and focus on the products I'm building.