
Do your own writing

https://alexhwoods.com/dont-let-ai-write-for-you/
128•karimf•7h ago

Comments

PaulRobinson•1h ago
Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission.

This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age.

I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.

Same with writing. There's an old joke in the writing business that most people want to have been published more than they want to go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.

When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.

Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.

So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.

Aurornis•1h ago
> Outsource things that aren't valuable to you and your core mission.

When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.

In the office, that review step gets outsourced to your coworkers.

Having a coworker who has ChatGPT generate slides, design docs, or PRs is terrible, because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.

Aurornis•1h ago
> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.

This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.

The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.

CharlesW•1h ago
The title of this article is Don't Let AI Write For You, when its point seems to be closer to Don't Let AI Think For You (see "Thinking").

This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.

Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4

This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.

lokar•20m ago
I would count direct dictation (e.g., someone writes down what you say, and that is the final text) as writing, in the context of producing a document (book, etc.) that you intend others to read.

It's not the same thing as talking to someone (or a group) about something.

keithnz•16m ago
I'm finding AI great to have a conversation with to flesh out ideas, with the added benefit it can summarize everything at the end
tines•12m ago
You're being steered without being aware of it.
whiplash451•5m ago
Worse. You’re being steered along a circle
TrianguloY•1h ago
> Letting an LLM write for you is like paying somebody to work out for you.

This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.

But I guess some people either choose a wrong job or had no other option. I'm happy to not be in that group.

fraywing•1h ago
>Letting an LLM write for you is like paying somebody to work out for you.

It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).

With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.

That's why I think we're seeing an inverse relationship between ideation output and coherence (and perhaps originality), along with a decline in creative thinking[0]

[0] https://time.com/7295195/ai-chatgpt-google-learning-school/

roadside_picnic•1h ago
I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea, that was crystal clear in my mind, fell apart the moment I started writing and I realize there were major contradictions I needed to resolve. Likewise I also have numerous times where writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking.

However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.

For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature only to preserve context (and not really to explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, then people groaned at the idea of having to read this. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).

janalsncm•57m ago
Well said. The most important part of writing is thinking. LLMs cannot do the thinking for you.

This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point?

Unless the idea is trivial, LLMs are probably just getting in the way.

fleebee•51m ago
You quote this:

> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?

Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.

I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.

atmosx•46m ago
I take "ideas" to mean well-scoped replies, like "list the pros and cons of this vs. that flow". While someone might think of N issues, the LLM might present another six, out of which three or four don't make sense but one or two do. Might be worth adding those to the document.
nerevarthelame•50m ago
I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."

It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.

dummydummy1234•46m ago
I have found one of the better use cases of LLMs to be a rubber duck.

Explaining a design, problem, etc and trying to find solutions is extremely useful.

I can bring the novelty; what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.

monkaiju•29m ago
I always find folks bringing up rubber ducking as a thing LLMs are good at to be misguided. IMO, what defines rubber ducking as a concept is that it is just the developer explaining what they're doing to themselves. Not to another person, and not to a thing pretending to be a person. If you have a "two way" or "conversational" debugging/designing experience it isn't rubber ducking, it's just normal design/debugging.

The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.

dexwiz•21m ago
Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care. Procrastination is tightly linked to fear of a negative outcome. LLMs can help with both of these. They can validate ideas in the now which can help overcome some of that anxiety.

Unfortunately they can also validate some really bad ideas.

satvikpendem•18m ago
Sometimes I don't want creativity though, I'm just not familiar enough with the solution space and I use the LLM as a sort of gradient descent simulator to the right solution to my problem (the LLM which itself used gradient descent when trained, meta, I know). I am not looking for wholly new solutions, just one that fits the problem the best, just as one could Google that information but LLMs save even that searching time.
j45•39m ago
Asking the LLM better will return better than average and bland and mainstream results.
paulryanrogers•34m ago
How does one ask better? Does better vary per model?
contagiousflow•32m ago
Why would they return "better" results?
dgxyz•35m ago
All LLM output is always dry as fuck quite frankly. At all levels from ideas and concepts through to the actual copy. And that’s dotted with pure excrement.

I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.

If I offend anyone I will not be apologising for it.

paulpauper•32m ago
LLMs can sometimes come up with novel or non-obvious insights... or just regurgitate Google-like results.
NewsaHackO•28m ago
Yes, I didn't get this portion at all. I feel as though letting an LLM brainstorm ideas for you would be worse in externally framing your thoughts than letting it write/proofread for you. If you pick one idea out of the 10 presented by the LLM, you are still confining yourself to the intersection of what the LLM thinks is important and what you think is important, because then you can never "generate" a thought that the LLM hasn't presented.
furyofantares•23m ago
I'm torn.

I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.

Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?

I guess I must feel it's slightly useful overall as I still do it.

bluepeter•48m ago
Nowadays my writing (and maybe all of ours) has totally devolved into "prompt-ese." Much like days of yore where we all approached Google searches with acrobatic language knowing how to specifically get something done.

Now? I am pushing so much of my writing into AI prompts, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar are mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to AI but may be lost on people. Or at least pre-prompt-writing people.

add-sub-mul-div•28m ago
> Nowadays my writing (and maybe all of ours)

No. Don't pretend your taking shortcuts is less questionable because everyone else is doing it too. We're not. Own it yourself, don't get me involved.

> I am able to be so much more effective by sheer volume of words

If you think value comes from volume of words you really need to understand writing better.

bluepeter•9m ago
Ok, but 3 generations ago, shorthand was a core skill that any competent professional could read and extract MORE value from than laboriously typeset prose. Something similar is probably happening now with prompt-ese and human-to-human (vs just AI) writing.
paulpauper•48m ago
Letting an LLM write for you is like paying somebody to work out for you.

Nah, nearly anyone can go to the gym and get some benefit from it. You don't need to be that skilled to get a personal trainer or just run/walk on a treadmill. Even mice and hamsters can do it.

Writing is way harder for a lot of people and does not in any way come naturally to most, unlike, say, the ability to move or to speak. Writing requires precision, organization, syntax, grammar, etc., which are distinct from the 'idea generation' process. Some people will find they cannot articulate their ideas well in long form and need an LLM, or that the output is so bad that outsourcing the writing to an LLM makes much more sense and saves tons of time. If the alternative for many people is "no writing" vs "LLM-assisted writing", the latter may be better.

This is why there can be a disconnect between having a correct "big idea" or otherwise being directionally right, but not being able to articulate it well in written form, as we see with the likes of Elon Musk (his sometimes cringy tweets) or Richard Branson (dyslexia).

People who are eloquent or even just competent at writing way overestimate how common the ability to write well is.

sikiri_app•6m ago
A real fact, and an interesting point.
gbro3n•48m ago
I fully agree with the sentiment of the article. I will say that I feel I've had some success in having an LLM outline a document, provided that I then go through and read / edit thoroughly. I think there's even an argument that this a) possibly catches areas I have forgotten to write about, and b) hooks into my critique mode, which sometimes feels more motivated than author mode (I'm slightly ashamed to say). This does come at the cost, however, of not putting myself in 'researcher' mode, where I go back through the system I'm writing about and follow the threads, reacquainting myself and judging my previous decisions.
firefoxd•30m ago
I'm 100% an advocate for not using LLMs for writing... But I'll tell you where I use them for just that: for ceremonies.

A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, and they are filed for the records.

I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.

windowliker•30m ago
>Don't Let AI Write For You

>Essay structured like LLM output

Hmmm...

drnick1•27m ago
> They are particularly good at generating ideas.

I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.

jonathaneunice•24m ago
Agree with the underlying point: "don't let an LLM do your thinking, or interfere with processes essential to your thinking things through clearly."

My own experience, however, is that the best models are quite good at helping you with those writing and thinking processes: finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it.

While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. Would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while sure, I often have to "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance.

The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.

D13Fd•3m ago
> LLMs are useful for research and checking your work.

I have to disagree that it's good for LLMs to do the research, depending on the context.

If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.

If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing to.

bboynton97•3m ago
There are a lot of ways to use an LLM; the least effective is automating an entire process, yet it's the most compelling.

To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.
