
A letter to those who fired tech writers because of AI

https://passo.uno/letter-those-who-fired-tech-writers-ai/
57•theletterf•3h ago

Comments

6stringmerc•1h ago
No! No! I want all the companies to go all in on AI and completely destroy the last of the professional respect given to writers.

Why?

Because the legal catastrophe that will follow will entertain me so very very much.

bregma•10m ago
I'll bring the popcorn.
aurareturn•1h ago
But you might not need 5 tech writers anymore. Just 1 who controls an LLM.
theletterf•1h ago
Perhaps. Could the same be said for engineers?
amelius•1h ago
Yes. But they are now called managers.
ap99•54m ago
Yes and no.

Five engineers could be turned into maybe two, but probably not less.

It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.

If both leave then you're screwed.

If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.

marginalia_nu•49m ago
This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 heads is like at most twice as productive as a team of 5, and a team of 5 is marginally more productive than a team of 3.

Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.

aurareturn•52m ago
Yes. That could be said for engineers as well.

If the business can no longer justify 5 engineers, then they might only have 1.

I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.

i.e.:

2022: 100 companies employ 10,000 engineers

2026: 1000 companies employ 10,000 engineers

The net result is the same for employment. But because AI makes each engineer that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
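The arithmetic behind this scenario can be sketched in a few lines. The company counts and headcounts below are the commenter's hypothetical figures, not real data:

```python
# Hypothetical figures from the comment above: total engineering
# employment stays flat while headcount per company shrinks.
scenarios = {
    2022: {"companies": 100, "total_engineers": 10_000},
    2026: {"companies": 1_000, "total_engineers": 10_000},
}

for year, s in scenarios.items():
    per_company = s["total_engineers"] / s["companies"]
    print(f"{year}: {s['companies']} companies, "
          f"{per_company:.0f} engineers each")
# 2022: 100 companies, 100 engineers each
# 2026: 1000 companies, 10 engineers each
```

The point of the toy numbers: total employment is constant, but per-company demand drops by 10x, which is what would make previously unviable businesses viable.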

matwood•50m ago
That assumes your backlog is finite.

Is the tech writers' backlog also seemingly infinite, like every tech backlog I've ever seen?

DeborahWrites•39m ago
Yes. Yes it is.
raincole•42m ago
We have been seeing this happen in real time in the past two years, no?
ekidd•27m ago
Yes. I have been building software and acting as tech lead for close to 30 years.

I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.

Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.

But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.

Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.

murderfs•1h ago
I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.
saagarjha•59m ago
Not everyone wants to write code.
murderfs•39m ago
Yeah, but almost everyone wants money. You can see this by looking at what projects have the best documentation: they're all things like the man-pages project where the contributors aren't doing it as a job when they could be working a more profitable profession instead.
saagarjha•17m ago
While I do appreciate man pages, I don't think they are something I would consider to be "the best documentation". Many of the authors of them are engineers, by the way.
DeborahWrites•40m ago
You sound unlucky in your tech writer encounters!

There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.

Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.

sehugg•1h ago
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
NitpickLawyer•36m ago
Meh. A bit too touchy-feely for my taste, and not much in the way of good arguments. Some of the things touched on in the article are either extreme romanticisations of the craft or rather naive takes (docs are product truth? Really?!?! That hasn't been the case in ages, with docs for multi-billion dollar solutions written by highly paid, grass-fed, you-won't-believe-they're-not-humans writers!)...

The parts about hallucinations and processes are also a bit dated. We're either at, or very close to the point where "agentic" stuff works in a "GAN" kind of way to "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back, that will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (at least for 80% of projects, where current human written docs are horrendous. ymmv. artisan projects not included)
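The "GAN"-like loop described above can be sketched as a generator/reproducer cycle. This is a purely hypothetical illustration of the idea, not any real tool: `write_docs` and `try_to_reproduce` are made-up stand-ins for LLM/agent calls, with trivial logic so the demo terminates:

```python
# Sketch of the loop: one agent drafts docs, a second tries to follow
# them, and any conflicts it hits are fed back into the next draft.

def write_docs(spec: str, feedback: list[str]) -> str:
    """Stand-in for an LLM call that drafts docs from a spec plus feedback."""
    notes = "; ".join(feedback)
    return f"docs for {spec}" + (f" (fixed: {notes})" if notes else "")

def try_to_reproduce(docs: str) -> list[str]:
    """Stand-in for an agent that follows the docs step by step and
    reports conflicts. Here it 'succeeds' once the docs mention a fix,
    just to keep the demo tiny."""
    return [] if "fixed" in docs else ["step 3 does not match the API"]

def doc_loop(spec: str, max_rounds: int = 5) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        docs = write_docs(spec, feedback)
        conflicts = try_to_reproduce(docs)
        if not conflicts:           # reproducer followed the docs cleanly
            return docs
        feedback.extend(conflicts)  # loop back with the failures
    return docs                     # give up after max_rounds

print(doc_loop("the /export endpoint"))
```

In a real setup both functions would be agent calls and the loop would bottom out on an actual execution check (does the tutorial run?), which is what would catch hallucinations a single-pass draft misses.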

What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.

PlatoIsADisease•10m ago
I love AI and use it daily, but I still run into hallucinations, even in CoT/thinking models. I don't think hallucinations are as bad as people make them out to be. But I've been using AI since GPT-3, so I'm hyper-aware.
ainiriand•32m ago
And here I am, 2026, and one of my purposes for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.

I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

In my humble opinion we will be losing that from people: the upskilling will be lost for sure, and that human upskilling is the real loss.

jraph•26m ago
> but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

Yep, and reading you will feel less boring.

The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.

And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.

duskdozer•21m ago
Seriously. It wasn't always this way, but now as soon as I notice the LLM-isms in a chunk of text, I can feel my brain shut off.
jraph•17m ago
You're absolutely right! It's not just the brain shutoffs—it's the feeling of death inside.
elcapitan•5m ago
Scanning for LLM garbage is now one of the first things I do when reading a larger piece of text that has been published post ChatGPT.
DeborahWrites•31m ago
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.

Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)

In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.

I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.

Nextgrid•10m ago
> it won't be a GOOD replacement

See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038

InMice•28m ago
Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.
drob518•27m ago
The best tech writers I have worked with don't merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and extract what they need from 1-1 sit-down interviews with engineering SMEs. I don't see AI doing either of those things well.
falcor84•14m ago
> I don’t see AI doing either of those things well.

I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.

Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.

nicbou•24m ago
I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.

I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.

Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out in the real world and ask questions.

I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.

People who think that AI can do this and the other things have an almost insulting understanding of the jobs they are trying to replace.

PlatoIsADisease•13m ago
>insulting

As a writer, you know this makes it seem emotional rather than factual?

Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI to brainstorm and for title ideas (which end up being inspiration rather than copy-paste).

Nextgrid•12m ago
The problem is that so many things have been monopolized or oligopolized by equally-mediocre actors so that quality ultimately no longer matters because it's not like people have any options.

You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.

Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
